Column: type (observed range)
url: string (lengths 61–61)
repository_url: string (1 value)
labels_url: string (lengths 75–75)
comments_url: string (lengths 70–70)
events_url: string (lengths 68–68)
html_url: string (lengths 49–51)
id: int64 (1.36B–2.29B)
node_id: string (lengths 18–19)
number: int64 (4.93k–6.89k)
title: string (lengths 1–290)
user: dict
labels: list (lengths 0–4)
state: string (2 values)
locked: bool (1 class)
assignee: dict
assignees: list (lengths 0–3)
milestone: dict
comments: sequence (lengths 0–30)
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: string (4 values)
active_lock_reason: null
body: string (lengths 1–33.9k)
reactions: dict
timeline_url: string (lengths 70–70)
performed_via_github_app: null
state_reason: string (3 values)
draft: bool (2 classes)
pull_request: dict
is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/5750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5750/comments
https://api.github.com/repos/huggingface/datasets/issues/5750/events
https://github.com/huggingface/datasets/issues/5750
1,668,289,067
I_kwDODunzps5jcBIr
5,750
Fail to create datasets from a generator when using Google Big Query
{ "login": "ivanprado", "id": 895720, "node_id": "MDQ6VXNlcjg5NTcyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/895720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ivanprado", "html_url": "https://github.com/ivanprado", "followers_url": "https://api.github.com/users/ivanprado/followers", "following_url": "https://api.github.com/users/ivanprado/following{/other_user}", "gists_url": "https://api.github.com/users/ivanprado/gists{/gist_id}", "starred_url": "https://api.github.com/users/ivanprado/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ivanprado/subscriptions", "organizations_url": "https://api.github.com/users/ivanprado/orgs", "repos_url": "https://api.github.com/users/ivanprado/repos", "events_url": "https://api.github.com/users/ivanprado/events{/privacy}", "received_events_url": "https://api.github.com/users/ivanprado/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "`from_generator` expects a generator function, not a generator object, so this should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(rows)\r\n\r\nfor r in ds:\r\n print(r)\r\n```", "@mariosasko your code was incomplete, so I tried to fix it:\r\n\r\n```py\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen():\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nThe error is also present in this case:\r\n\r\n```\r\n_pickle.PicklingError: Pickling client objects is explicitly not supported.\r\nClients have non-trivial state that is local and unpickleable.\r\n```\r\n\r\nI think it doesn't matter if the generator is an object or a function. The problem is that the generator is referencing an object that is not pickable (the client in this case). ", "It does matter: this function expects a generator function, as stated in the docs.\r\n\r\nThis should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\ndef gen():\r\n client = bigquery.Client()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nWe could allow passing non-picklable objects and use a random hash for the generated arrow file. In that case, the caching mechanism would not work, meaning repeated calls with the same set of arguments would generate new datasets instead of reusing the cached version, but this behavior is still better than raising an error.", "Thank you @mariosasko . Your last code is working indeed. Curiously, the important detail here was to wrap the client instantiation within the generator itself. If the line `client = bigquery.Client()` is moved outside, then the error is back.\r\n\r\nI see now also your point in regard to the generator being a generator function. We can close the issue if you want." ]
"2023-04-14T13:50:59"
"2023-04-17T12:20:43"
"2023-04-17T12:20:43"
NONE
null
### Describe the bug Creating a dataset from a generator using `Dataset.from_generator()` fails if the generator is the [Google Big Query Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the Big Query client is not pickable. And the function `create_config_id` tries to get a hash of the generator by pickling it. So the following error is generated: ``` _pickle.PicklingError: Pickling client objects is explicitly not supported. Clients have non-trivial state that is local and unpickleable. ``` ### Steps to reproduce the bug 1. Install the big query client and datasets `pip install google-cloud-bigquery datasets` 2. Run the following code: ```py from datasets import Dataset from google.cloud import bigquery client = bigquery.Client() # Perform a query. QUERY = ( 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` ' 'WHERE state = "TX" ' 'LIMIT 100') query_job = client.query(QUERY) # API request rows = query_job.result() # Waits for query to finish ds = Dataset.from_generator(rows) for r in ds: print(r) ``` ### Expected behavior Two options: 1. Ignore the pickle errors when computing the hash 2. Provide a scape hutch so that we can avoid calculating the hash for the generator. For example, allowing to provide a hash from the user. ### Environment info python 3.9 google-cloud-bigquery 3.9.0 datasets 2.11.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5750/timeline
null
completed
null
null
false
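For reference, a minimal sketch of the working pattern confirmed in the comments of the record above: the BigQuery client is instantiated inside the generator function, so `Dataset.from_generator` only hashes the function and never tries to pickle the unpicklable client. The query is the one used in the issue.

```python
from datasets import Dataset
from google.cloud import bigquery

def gen():
    # Instantiating the client inside the generator function means
    # Dataset.from_generator only has to hash the function itself and
    # never attempts to pickle the (unpicklable) BigQuery client.
    client = bigquery.Client()
    QUERY = (
        'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '
        'WHERE state = "TX" '
        'LIMIT 100')
    query_job = client.query(QUERY)  # API request
    yield from query_job.result()    # Waits for the query to finish

ds = Dataset.from_generator(gen)

for r in ds:
    print(r)
```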
https://api.github.com/repos/huggingface/datasets/issues/5749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5749/comments
https://api.github.com/repos/huggingface/datasets/issues/5749/events
https://github.com/huggingface/datasets/issues/5749
1,668,016,321
I_kwDODunzps5ja-jB
5,749
AttributeError: 'Version' object has no attribute 'match'
{ "login": "gulnaz-zh", "id": 54584290, "node_id": "MDQ6VXNlcjU0NTg0Mjkw", "avatar_url": "https://avatars.githubusercontent.com/u/54584290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gulnaz-zh", "html_url": "https://github.com/gulnaz-zh", "followers_url": "https://api.github.com/users/gulnaz-zh/followers", "following_url": "https://api.github.com/users/gulnaz-zh/following{/other_user}", "gists_url": "https://api.github.com/users/gulnaz-zh/gists{/gist_id}", "starred_url": "https://api.github.com/users/gulnaz-zh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gulnaz-zh/subscriptions", "organizations_url": "https://api.github.com/users/gulnaz-zh/orgs", "repos_url": "https://api.github.com/users/gulnaz-zh/repos", "events_url": "https://api.github.com/users/gulnaz-zh/events{/privacy}", "received_events_url": "https://api.github.com/users/gulnaz-zh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I got the same error, and the official website for visual genome is down. Did you solve this problem? ", "I am in the same situation now :( ", "Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.", "The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.", "Apart form data host server being down, there is an additional issue with the `datasets` library introduced by this PR:\r\n- #5238\r\n\r\nI am working to fix it.", "PR that fixes the AttributeError: https://huggingface.co/datasets/visual_genome/discussions/2", "For the issue with their data host server being down, I have opened a discussion in the \"Community\" tab of the Hub dataset: https://huggingface.co/datasets/visual_genome/discussions/3\r\nLet's continue the discussion there.", "The authors just replied to us with their new URL: https://homes.cs.washington.edu/~ranjay/visualgenome/\r\n\r\nWe have fixed the datasets loading script, which is operative again." ]
"2023-04-14T10:48:06"
"2023-06-30T11:31:17"
"2023-04-18T12:57:08"
NONE
null
### Describe the bug When I run from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') AttributeError: 'Version' object has no attribute 'match' ### Steps to reproduce the bug from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') ### Expected behavior This is error trace: Downloading and preparing dataset visual_genome/region_descriptions_v1.2.0 to C:/Users/Acer/.cache/huggingface/datasets/visual_genome/region_descriptions_v1.2.0/1.2.0/136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') File ~\.conda\envs\aai\Lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1790 # Download and prepare data -> 1791 builder_instance.download_and_prepare( 1792 download_config=download_config, 1793 download_mode=download_mode, 1794 verification_mode=verification_mode, 1795 try_from_hf_gcs=try_from_hf_gcs, 1796 num_proc=num_proc, 1797 storage_options=storage_options, 1798 ) 1800 # Build dataset for splits 1801 keep_in_memory = ( 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1803 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 889 if num_proc is not None: 890 prepare_split_kwargs["num_proc"] = num_proc --> 891 self._download_and_prepare( 892 dl_manager=dl_manager, 893 verification_mode=verification_mode, 894 **prepare_split_kwargs, 895 **download_and_prepare_kwargs, 896 ) 897 # Sync info 898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:1651, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1651 super()._download_and_prepare( 1652 dl_manager, 1653 verification_mode, 1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1655 or verification_mode == VerificationMode.ALL_CHECKS, 1656 **prepare_splits_kwargs, 1657 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:964, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 962 split_dict = SplitDict(dataset_name=self.name) 963 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 964 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 966 # Checksums verification 967 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File 
~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:377, in VisualGenome._split_generators(self, dl_manager) 375 def _split_generators(self, dl_manager): 376 # Download image meta datas. --> 377 image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url) 378 image_metadatas_file = os.path.join( 379 image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url) 380 ) 382 # Download annotations File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:328, in VisualGenomeConfig.image_metadata_url(self) 326 @property 327 def image_metadata_url(self): --> 328 if not self.version.match(_LATEST_VERSIONS["image_metadata"]): 329 logger.warning( 330 f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double check that image data are unchanged between the two versions." 331 ) 332 return f"{_BASE_ANNOTATION_URL}/image_data.json.zip" ### Environment info datasets 2.11.0 python 3.11.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5749/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5749/timeline
null
completed
null
null
false
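As a minimal sketch, the original call from the report above should work again once the loading-script fix and the new host URL mentioned in the comments are in place; this simply repeats the reproduction snippet from the issue body.

```python
from datasets import load_dataset

# Same call as in the report; after the loading-script fix referenced in the
# comments, it should no longer raise AttributeError: 'Version' object has no
# attribute 'match'.
data = load_dataset("visual_genome", "region_descriptions_v1.2.0")
print(data)
```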
https://api.github.com/repos/huggingface/datasets/issues/5748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5748/comments
https://api.github.com/repos/huggingface/datasets/issues/5748/events
https://github.com/huggingface/datasets/pull/5748
1,667,517,024
PR_kwDODunzps5OSgNH
5,748
[BUG FIX] Issue 5739
{ "login": "ericxsun", "id": 1772912, "node_id": "MDQ6VXNlcjE3NzI5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ericxsun", "html_url": "https://github.com/ericxsun", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "repos_url": "https://api.github.com/users/ericxsun/repos", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-14T05:07:31"
"2023-04-14T05:07:31"
null
NONE
null
A fix for https://github.com/huggingface/datasets/issues/5739
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5748/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5748", "html_url": "https://github.com/huggingface/datasets/pull/5748", "diff_url": "https://github.com/huggingface/datasets/pull/5748.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5748.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5747/comments
https://api.github.com/repos/huggingface/datasets/issues/5747/events
https://github.com/huggingface/datasets/pull/5747
1,667,270,412
PR_kwDODunzps5ORtBF
5,747
[WIP] Add Dataset.to_spark
{ "login": "maddiedawson", "id": 106995444, "node_id": "U_kgDOBmCe9A", "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maddiedawson", "html_url": "https://github.com/maddiedawson", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "repos_url": "https://api.github.com/users/maddiedawson/repos", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2023-04-13T23:20:03"
"2024-01-08T18:31:50"
"2024-01-08T18:31:50"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5747/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5747", "html_url": "https://github.com/huggingface/datasets/pull/5747", "diff_url": "https://github.com/huggingface/datasets/pull/5747.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5747.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5746/comments
https://api.github.com/repos/huggingface/datasets/issues/5746/events
https://github.com/huggingface/datasets/pull/5746
1,667,102,459
PR_kwDODunzps5ORIUU
5,746
Fix link in docs
{ "login": "bbbxyz", "id": 7485661, "node_id": "MDQ6VXNlcjc0ODU2NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/7485661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bbbxyz", "html_url": "https://github.com/bbbxyz", "followers_url": "https://api.github.com/users/bbbxyz/followers", "following_url": "https://api.github.com/users/bbbxyz/following{/other_user}", "gists_url": "https://api.github.com/users/bbbxyz/gists{/gist_id}", "starred_url": "https://api.github.com/users/bbbxyz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bbbxyz/subscriptions", "organizations_url": "https://api.github.com/users/bbbxyz/orgs", "repos_url": "https://api.github.com/users/bbbxyz/repos", "events_url": "https://api.github.com/users/bbbxyz/events{/privacy}", "received_events_url": "https://api.github.com/users/bbbxyz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006461 / 0.011353 (-0.004892) | 0.004671 / 0.011008 (-0.006337) | 0.097329 / 0.038508 (0.058821) | 0.028380 / 0.023109 (0.005270) | 0.369892 / 0.275898 (0.093994) | 0.398244 / 0.323480 (0.074764) | 0.004795 / 0.007986 (-0.003190) | 0.004866 / 0.004328 (0.000538) | 0.075060 / 0.004250 (0.070809) | 0.035678 / 0.037052 (-0.001374) | 0.372197 / 0.258489 (0.113708) | 0.407509 / 0.293841 (0.113668) | 0.031557 / 0.128546 (-0.096989) | 0.011608 / 0.075646 (-0.064038) | 0.325467 / 0.419271 (-0.093805) | 0.042590 / 0.043533 (-0.000943) | 0.373738 / 0.255139 (0.118599) | 0.395793 / 0.283200 (0.112593) | 0.082335 / 0.141683 (-0.059348) | 1.471582 / 1.452155 (0.019427) | 1.535834 / 1.492716 (0.043117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192432 / 0.018006 (0.174426) | 0.404423 / 0.000490 (0.403933) | 0.003252 / 0.000200 (0.003052) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025312 / 0.037411 (-0.012099) | 0.099964 / 0.014526 (0.085438) | 0.108779 / 0.176557 (-0.067777) | 0.170438 / 0.737135 (-0.566697) | 0.110116 / 0.296338 (-0.186223) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420402 / 0.215209 (0.205193) | 4.179142 / 2.077655 (2.101487) | 
1.858114 / 1.504120 (0.353994) | 1.674452 / 1.541195 (0.133257) | 1.697839 / 1.468490 (0.229349) | 0.694707 / 4.584777 (-3.890070) | 3.394321 / 3.745712 (-0.351391) | 1.918437 / 5.269862 (-3.351425) | 1.277954 / 4.565676 (-3.287723) | 0.082357 / 0.424275 (-0.341918) | 0.012206 / 0.007607 (0.004598) | 0.522093 / 0.226044 (0.296049) | 5.239604 / 2.268929 (2.970675) | 2.347764 / 55.444624 (-53.096860) | 1.996864 / 6.876477 (-4.879613) | 2.050820 / 2.142072 (-0.091253) | 0.806110 / 4.805227 (-3.999118) | 0.151061 / 6.500664 (-6.349603) | 0.066438 / 0.075469 (-0.009031) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211233 / 1.841788 (-0.630554) | 14.054422 / 8.074308 (5.980114) | 14.110141 / 10.191392 (3.918749) | 0.129962 / 0.680424 (-0.550462) | 0.017271 / 0.534201 (-0.516930) | 0.386410 / 0.579283 (-0.192873) | 0.392648 / 0.434364 (-0.041716) | 0.444940 / 0.540337 (-0.095398) | 0.533535 / 1.386936 (-0.853401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006865 / 0.011353 (-0.004488) | 0.004662 / 0.011008 (-0.006346) | 0.077837 / 0.038508 (0.039329) | 0.028258 / 0.023109 (0.005149) | 0.346136 / 0.275898 (0.070238) | 0.380414 / 0.323480 (0.056934) | 0.005039 / 0.007986 (-0.002947) | 0.004967 / 0.004328 (0.000638) | 0.077774 / 0.004250 (0.073523) | 0.037504 / 0.037052 (0.000452) | 0.341550 / 0.258489 (0.083061) | 0.382494 / 0.293841 (0.088653) | 0.031881 / 0.128546 (-0.096665) | 0.011746 / 0.075646 (-0.063901) | 0.087087 / 0.419271 (-0.332185) | 0.043108 / 0.043533 (-0.000425) | 0.344103 / 0.255139 (0.088964) | 0.366613 / 0.283200 (0.083413) | 0.090399 / 0.141683 (-0.051284) | 1.492675 / 1.452155 (0.040520) | 1.588666 / 1.492716 (0.095950) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191859 / 0.018006 (0.173853) | 0.412514 / 0.000490 (0.412025) | 0.001953 / 0.000200 (0.001753) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025159 / 0.037411 (-0.012252) | 0.100125 / 0.014526 (0.085599) | 0.106000 / 0.176557 (-0.070556) | 0.160710 / 0.737135 (-0.576425) | 0.110449 / 0.296338 (-0.185889) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436636 / 0.215209 (0.221427) | 4.364597 / 2.077655 (2.286942) | 2.077492 / 1.504120 (0.573372) | 1.868248 / 1.541195 (0.327053) | 1.911218 / 1.468490 (0.442728) | 0.700306 / 4.584777 (-3.884471) | 3.385428 / 3.745712 (-0.360284) | 2.965384 / 5.269862 (-2.304478) | 1.522093 / 4.565676 (-3.043583) | 0.082805 / 0.424275 (-0.341470) | 0.012432 / 0.007607 (0.004825) | 0.538478 / 0.226044 (0.312433) | 5.383207 / 2.268929 (3.114278) | 2.525177 / 55.444624 (-52.919447) | 2.179632 / 6.876477 (-4.696845) | 2.280768 / 2.142072 (0.138695) | 0.805869 / 4.805227 (-3.999358) | 0.152716 / 6.500664 (-6.347948) | 0.067848 / 0.075469 (-0.007621) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318899 / 1.841788 (-0.522889) | 14.416310 / 8.074308 (6.342002) | 14.172804 / 10.191392 (3.981412) | 0.141729 / 0.680424 (-0.538695) | 0.016785 / 0.534201 (-0.517416) | 0.378626 / 0.579283 (-0.200657) | 0.387153 / 0.434364 (-0.047211) | 0.439950 / 0.540337 (-0.100388) | 0.523958 / 1.386936 (-0.862978) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7c3a9b057c476c40d157bd7a5d57f49066239df0 \"CML watermark\")\n" ]
"2023-04-13T20:45:19"
"2023-04-14T13:15:38"
"2023-04-14T13:08:42"
CONTRIBUTOR
null
Fixes a broken link in the use_with_pytorch docs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5746/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5746", "html_url": "https://github.com/huggingface/datasets/pull/5746", "diff_url": "https://github.com/huggingface/datasets/pull/5746.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5746.patch", "merged_at": "2023-04-14T13:08:42" }
true
https://api.github.com/repos/huggingface/datasets/issues/5745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5745/comments
https://api.github.com/repos/huggingface/datasets/issues/5745/events
https://github.com/huggingface/datasets/pull/5745
1,667,086,143
PR_kwDODunzps5ORE2n
5,745
[BUG FIX] Issue 5744
{ "login": "keyboardAnt", "id": 15572698, "node_id": "MDQ6VXNlcjE1NTcyNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keyboardAnt", "html_url": "https://github.com/keyboardAnt", "followers_url": "https://api.github.com/users/keyboardAnt/followers", "following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}", "gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}", "starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions", "organizations_url": "https://api.github.com/users/keyboardAnt/orgs", "repos_url": "https://api.github.com/users/keyboardAnt/repos", "events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}", "received_events_url": "https://api.github.com/users/keyboardAnt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.", "Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only passes it to pandas if the user passes it to `load_dataset`.\r\n\r\nYou should better:\r\n- Either \"take steps to stop the use of 'mangle_dupe_cols'\" (as it was suggested in the deprecation warning in pandas-1.5.3)\r\n- Or pin pandas (< 2.0.0) in your local requirements file\r\n\r\nPlease note that from `datasets` library, we don't want to force users to use a specific pandas version. We would like to support users as well:\r\n- that use pandas < 1.5.3\r\n- that use pandas >= 2.0.0 and that do not pass the 'mangle_dupe_cols' parameter", "`datasets` 2.11 doesn't pass `mangle_dupe_cols` unless the user specifies it indeed, so I think we're fine" ]
"2023-04-13T20:29:55"
"2023-04-21T15:22:43"
null
NONE
null
A temporal fix for https://github.com/huggingface/datasets/issues/5744.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5745/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5745", "html_url": "https://github.com/huggingface/datasets/pull/5745", "diff_url": "https://github.com/huggingface/datasets/pull/5745.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5745.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5744/comments
https://api.github.com/repos/huggingface/datasets/issues/5744/events
https://github.com/huggingface/datasets/issues/5744
1,667,076,620
I_kwDODunzps5jXZIM
5,744
[BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'`
{ "login": "keyboardAnt", "id": 15572698, "node_id": "MDQ6VXNlcjE1NTcyNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keyboardAnt", "html_url": "https://github.com/keyboardAnt", "followers_url": "https://api.github.com/users/keyboardAnt/followers", "following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}", "gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}", "starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions", "organizations_url": "https://api.github.com/users/keyboardAnt/orgs", "repos_url": "https://api.github.com/users/keyboardAnt/repos", "events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}", "received_events_url": "https://api.github.com/users/keyboardAnt/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?", "This has been fixed in `datasets` 2.11", "I am still getting this bug with the latest pandas and datasets lib installed. Anyone else?\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"csv\", data_files={\"train\":\"/kaggle/working/train.csv\", \"test\":\"/kaggle/working/test.csv\"})\r\nprint(dataset)\r\n\r\n\r\n\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[5], line 3\r\n 1 from datasets import load_dataset\r\n----> 3 dataset = load_dataset(\"csv\", data_files={\"train\":\"/kaggle/working/train.csv\", \"test\":\"/kaggle/working/test.csv\"})\r\n 4 print(dataset)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/load.py:1691, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1688 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1690 # Download and prepare data\r\n-> 1691 builder_instance.download_and_prepare(\r\n 1692 download_config=download_config,\r\n 1693 download_mode=download_mode,\r\n 1694 ignore_verifications=ignore_verifications,\r\n 1695 try_from_hf_gcs=try_from_hf_gcs,\r\n 1696 use_auth_token=use_auth_token,\r\n 1697 )\r\n 1699 # Build dataset for splits\r\n 1700 keep_in_memory = (\r\n 1701 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1702 )\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/builder.py:605, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 603 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 604 if not downloaded_from_gcs:\r\n--> 605 self._download_and_prepare(\r\n 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 607 )\r\n 608 # Sync info\r\n 609 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/builder.py:694, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 690 split_dict.add(split_generator.split_info)\r\n 692 try:\r\n 693 # Prepare split will record examples associated to the split\r\n--> 694 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 695 except OSError as e:\r\n 696 raise OSError(\r\n 697 \"Cannot find data file. 
\"\r\n 698 + (self.manual_download_instructions or \"\")\r\n 699 + \"\\nOriginal error:\\n\"\r\n 700 + str(e)\r\n 701 ) from None\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/builder.py:1151, in ArrowBasedBuilder._prepare_split(self, split_generator)\r\n 1149 generator = self._generate_tables(**split_generator.gen_kwargs)\r\n 1150 with ArrowWriter(features=self.info.features, path=fpath) as writer:\r\n-> 1151 for key, table in logging.tqdm(\r\n 1152 generator, unit=\" tables\", leave=False, disable=True # not logging.is_progress_bar_enabled()\r\n 1153 ):\r\n 1154 writer.write_table(table)\r\n 1155 num_examples, num_bytes = writer.finalize()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/tqdm/notebook.py:249, in tqdm_notebook.__iter__(self)\r\n 247 try:\r\n 248 it = super(tqdm_notebook, self).__iter__()\r\n--> 249 for obj in it:\r\n 250 # return super(tqdm...) will not catch exception\r\n 251 yield obj\r\n 252 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/tqdm/std.py:1170, in tqdm.__iter__(self)\r\n 1167 # If the bar is disabled, then just walk the iterable\r\n 1168 # (note: keep this check outside the loop for performance)\r\n 1169 if self.disable:\r\n-> 1170 for obj in iterable:\r\n 1171 yield obj\r\n 1172 return\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:154, in Csv._generate_tables(self, files)\r\n 152 dtype = {name: dtype.to_pandas_dtype() for name, dtype in zip(schema.names, schema.types)} if schema else None\r\n 153 for file_idx, file in enumerate(files):\r\n--> 154 csv_file_reader = pd.read_csv(file, iterator=True, dtype=dtype, **self.config.read_csv_kwargs)\r\n 155 try:\r\n 156 for batch_idx, df in enumerate(csv_file_reader):\r\n\r\nTypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'```", "Feel free to update `datasets` to fix this issue\r\n\r\n```\r\npip install -U datasets\r\n```", "I am still having the same issue with the version >= 2.14", "Edit: Sorry, I found that our version is 2.2.1. Please ignore the following comment. 
This issue was already solved by this line:\r\nhttps://github.com/huggingface/datasets/blob/bf02cff8d70180a9e89328961ded9e3d8510fd22/src/datasets/packaged_modules/csv/csv.py#L18\r\n\r\n> This issue still exists as you can see in version 2.14:\r\n> https://github.com/huggingface/datasets/blob/bf02cff8d70180a9e89328961ded9e3d8510fd22/src/datasets/packaged_modules/csv/csv.py#L35\r\n> https://github.com/huggingface/datasets/blob/bf02cff8d70180a9e89328961ded9e3d8510fd22/src/datasets/packaged_modules/csv/csv.py#L84\r\n> that \"mangle_dupe_cols\" still exists in the arguments.\r\n> \r\n> And this error occurs at this line:\r\n> https://github.com/huggingface/datasets/blob/bf02cff8d70180a9e89328961ded9e3d8510fd22/src/datasets/packaged_modules/csv/csv.py#L185\r\n> where\r\n> ```python\r\n> file == '~/llama/llama-recipes/recipes/finetuning/gtrain_10k.csv'\r\n> dtype == None\r\n> self.config.pd_read_csv_kwargs == {\r\n> \"sep\": \",\",\r\n> \"header\": \"infer\",\r\n> \"index_col\": None,\r\n> \"usecols\": None,\r\n> \"mangle_dupe_cols\": True,\r\n> \"engine\": None,\r\n> \"true_values\": None,\r\n> \"false_values\": None,\r\n> \"skipinitialspace\": False,\r\n> \"skiprows\": None,\r\n> \"nrows\": None,\r\n> \"na_values\": None,\r\n> \"keep_default_na\": True,\r\n> \"na_filter\": True,\r\n> \"verbose\": False,\r\n> \"skip_blank_lines\": True,\r\n> \"thousands\": None,\r\n> \"decimal\": \".\",\r\n> \"lineterminator\": None,\r\n> \"quotechar\": '\"',\r\n> \"quoting\": 0,\r\n> \"escapechar\": None,\r\n> \"comment\": None,\r\n> \"encoding\": None,\r\n> \"dialect\": None,\r\n> \"skipfooter\": 0,\r\n> \"doublequote\": True,\r\n> \"memory_map\": False,\r\n> \"float_precision\": None,\r\n> \"chunksize\": 10000,\r\n> }\r\n> ```\r\n> for me.\r\n> \r\n> Here is where we got the error: https://github.com/meta-llama/llama-recipes/issues/426" ]
"2023-04-13T20:21:28"
"2024-04-09T16:13:59"
"2023-07-06T17:01:59"
NONE
null
The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`. For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745 --- * The FutureWarning mentioned above: ``` FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5744/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5744/timeline
null
completed
null
null
false
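A minimal sketch of the workaround discussed in the comments of the record above: either upgrade `datasets` (2.11 and later only forward `mangle_dupe_cols` if the caller passes it) or pin `pandas < 2.0.0`, and load the CSV files without passing `mangle_dupe_cols`. The file names below are placeholders.

```python
from datasets import load_dataset

# Assumes datasets >= 2.11 (or, alternatively, pandas pinned below 2.0.0).
# Do not pass mangle_dupe_cols: pandas 2.0.0 removed that keyword, and
# datasets only forwards it to pandas.read_csv when the caller supplies it.
dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv", "test": "test.csv"},  # placeholder paths
)
print(dataset)
```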
https://api.github.com/repos/huggingface/datasets/issues/5743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5743/comments
https://api.github.com/repos/huggingface/datasets/issues/5743/events
https://github.com/huggingface/datasets/issues/5743
1,666,843,832
I_kwDODunzps5jWgS4
5,743
dataclass.py in virtual environment is overriding the stdlib module "dataclasses"
{ "login": "syedabdullahhassan", "id": 71216295, "node_id": "MDQ6VXNlcjcxMjE2Mjk1", "avatar_url": "https://avatars.githubusercontent.com/u/71216295?v=4", "gravatar_id": "", "url": "https://api.github.com/users/syedabdullahhassan", "html_url": "https://github.com/syedabdullahhassan", "followers_url": "https://api.github.com/users/syedabdullahhassan/followers", "following_url": "https://api.github.com/users/syedabdullahhassan/following{/other_user}", "gists_url": "https://api.github.com/users/syedabdullahhassan/gists{/gist_id}", "starred_url": "https://api.github.com/users/syedabdullahhassan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syedabdullahhassan/subscriptions", "organizations_url": "https://api.github.com/users/syedabdullahhassan/orgs", "repos_url": "https://api.github.com/users/syedabdullahhassan/repos", "events_url": "https://api.github.com/users/syedabdullahhassan/events{/privacy}", "received_events_url": "https://api.github.com/users/syedabdullahhassan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We no longer depend on `dataclasses` (for almost a year), so I don't think our package is the problematic one. \r\n\r\nI think it makes more sense to raise this issue in the `dataclasses` repo: https://github.com/ericvsmith/dataclasses." ]
"2023-04-13T17:28:33"
"2023-04-17T12:23:18"
"2023-04-17T12:23:18"
NONE
null
### Describe the bug "e:\Krish_naik\FSDSRegression\venv\Lib\dataclasses.py" is overriding the stdlib module "dataclasses" ### Steps to reproduce the bug module issue ### Expected behavior overriding the stdlib module "dataclasses" ### Environment info VS code
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5743/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5742/comments
https://api.github.com/repos/huggingface/datasets/issues/5742/events
https://github.com/huggingface/datasets/pull/5742
1,666,209,738
PR_kwDODunzps5OOH-W
5,742
Warning specifying future change in to_tf_dataset behaviour
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006693 / 0.011353 (-0.004660) | 0.004586 / 0.011008 (-0.006422) | 0.097238 / 0.038508 (0.058730) | 0.027912 / 0.023109 (0.004802) | 0.347339 / 0.275898 (0.071441) | 0.393847 / 0.323480 (0.070368) | 0.005105 / 0.007986 (-0.002880) | 0.004750 / 0.004328 (0.000422) | 0.074671 / 0.004250 (0.070421) | 0.037912 / 0.037052 (0.000860) | 0.368973 / 0.258489 (0.110483) | 0.403983 / 0.293841 (0.110142) | 0.030817 / 0.128546 (-0.097730) | 0.011813 / 0.075646 (-0.063833) | 0.324470 / 0.419271 (-0.094802) | 0.044232 / 0.043533 (0.000699) | 0.347623 / 0.255139 (0.092484) | 0.382458 / 0.283200 (0.099259) | 0.086603 / 0.141683 (-0.055080) | 1.485778 / 1.452155 (0.033623) | 1.549776 / 1.492716 (0.057059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200154 / 0.018006 (0.182147) | 0.440645 / 0.000490 (0.440155) | 0.003664 / 0.000200 (0.003464) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023635 / 0.037411 (-0.013776) | 0.094969 / 0.014526 (0.080443) | 0.103630 / 0.176557 (-0.072927) | 0.168655 / 0.737135 (-0.568480) | 0.105850 / 0.296338 (-0.190488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425224 / 0.215209 (0.210015) | 4.236618 / 2.077655 (2.158963) | 1.917091 
/ 1.504120 (0.412971) | 1.746984 / 1.541195 (0.205789) | 1.817766 / 1.468490 (0.349276) | 0.700989 / 4.584777 (-3.883788) | 3.412577 / 3.745712 (-0.333135) | 3.049311 / 5.269862 (-2.220551) | 1.607692 / 4.565676 (-2.957984) | 0.083410 / 0.424275 (-0.340865) | 0.012601 / 0.007607 (0.004994) | 0.528244 / 0.226044 (0.302200) | 5.284134 / 2.268929 (3.015206) | 2.391885 / 55.444624 (-53.052740) | 2.020018 / 6.876477 (-4.856459) | 2.105908 / 2.142072 (-0.036164) | 0.801262 / 4.805227 (-4.003965) | 0.151467 / 6.500664 (-6.349197) | 0.066529 / 0.075469 (-0.008940) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203894 / 1.841788 (-0.637894) | 13.827561 / 8.074308 (5.753253) | 14.136730 / 10.191392 (3.945338) | 0.143829 / 0.680424 (-0.536595) | 0.016410 / 0.534201 (-0.517791) | 0.378194 / 0.579283 (-0.201089) | 0.391235 / 0.434364 (-0.043129) | 0.439261 / 0.540337 (-0.101076) | 0.527181 / 1.386936 (-0.859755) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006639 / 0.011353 (-0.004714) | 0.004469 / 0.011008 (-0.006540) | 0.076495 / 0.038508 (0.037987) | 0.027880 / 0.023109 (0.004771) | 0.342807 / 0.275898 (0.066909) | 0.374258 / 0.323480 (0.050778) | 0.005543 / 0.007986 (-0.002443) | 0.003362 / 0.004328 (-0.000966) | 0.075064 / 0.004250 (0.070813) | 0.039209 / 0.037052 (0.002156) | 0.342490 / 0.258489 (0.084001) | 0.382135 / 0.293841 (0.088294) | 0.030356 / 0.128546 (-0.098191) | 0.011762 / 0.075646 (-0.063884) | 0.086031 / 0.419271 (-0.333241) | 0.041991 / 0.043533 (-0.001542) | 0.340323 / 0.255139 (0.085184) | 0.364160 / 0.283200 (0.080961) | 0.088483 / 0.141683 (-0.053200) | 1.502836 / 1.452155 (0.050681) | 1.570438 / 1.492716 (0.077722) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218486 / 0.018006 (0.200480) | 0.405251 / 0.000490 (0.404761) | 0.000398 / 0.000200 (0.000198) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025738 / 0.037411 (-0.011673) | 0.100390 / 0.014526 (0.085864) | 0.109913 / 0.176557 (-0.066644) | 0.161310 / 0.737135 (-0.575826) | 0.113269 / 0.296338 (-0.183069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438083 / 0.215209 (0.222874) | 4.377742 / 2.077655 (2.300087) | 2.069949 / 1.504120 (0.565829) | 1.857807 / 1.541195 (0.316613) | 1.881315 / 1.468490 (0.412825) | 0.695373 / 4.584777 (-3.889404) | 3.440287 / 3.745712 (-0.305425) | 1.842888 / 5.269862 (-3.426973) | 1.146655 / 4.565676 (-3.419022) | 0.083386 / 0.424275 (-0.340889) | 0.012290 / 0.007607 (0.004683) | 0.545672 / 0.226044 (0.319628) | 5.469568 / 2.268929 (3.200639) | 2.511886 / 55.444624 (-52.932739) | 2.184210 / 6.876477 (-4.692267) | 2.329822 / 2.142072 (0.187749) | 0.804114 / 4.805227 (-4.001114) | 0.151651 / 6.500664 (-6.349013) | 0.067269 / 0.075469 (-0.008200) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272564 / 1.841788 (-0.569223) | 14.180708 / 8.074308 (6.106400) | 14.181657 / 10.191392 (3.990265) | 0.131443 / 0.680424 (-0.548981) | 0.016513 / 0.534201 (-0.517688) | 0.383786 / 0.579283 (-0.195497) | 0.397678 / 0.434364 (-0.036686) | 0.447003 / 0.540337 (-0.093334) | 0.539453 / 1.386936 (-0.847483) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#649d5a3315f9e7666713b6affe318ee00c7163a0 \"CML watermark\")\n" ]
"2023-04-13T11:10:00"
"2023-04-21T13:18:14"
"2023-04-21T13:11:09"
CONTRIBUTOR
null
Adds a warning specifying the future changes to `to_tf_dataset` behaviour that will happen when #5602 is merged in
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5742/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5742", "html_url": "https://github.com/huggingface/datasets/pull/5742", "diff_url": "https://github.com/huggingface/datasets/pull/5742.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5742.patch", "merged_at": "2023-04-21T13:11:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/5741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5741/comments
https://api.github.com/repos/huggingface/datasets/issues/5741/events
https://github.com/huggingface/datasets/pull/5741
1,665,860,919
PR_kwDODunzps5OM9nZ
5,741
Fix CI warnings
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007448 / 0.011353 (-0.003905) | 0.005182 / 0.011008 (-0.005826) | 0.098718 / 0.038508 (0.060210) | 0.034594 / 0.023109 (0.011485) | 0.317301 / 0.275898 (0.041403) | 0.357800 / 0.323480 (0.034320) | 0.005860 / 0.007986 (-0.002126) | 0.004267 / 0.004328 (-0.000061) | 0.074876 / 0.004250 (0.070626) | 0.048002 / 0.037052 (0.010950) | 0.333360 / 0.258489 (0.074871) | 0.362080 / 0.293841 (0.068239) | 0.035957 / 0.128546 (-0.092589) | 0.012245 / 0.075646 (-0.063401) | 0.332970 / 0.419271 (-0.086301) | 0.050825 / 0.043533 (0.007293) | 0.313936 / 0.255139 (0.058797) | 0.340684 / 0.283200 (0.057485) | 0.106630 / 0.141683 (-0.035053) | 1.427898 / 1.452155 (-0.024257) | 1.547518 / 1.492716 (0.054801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296952 / 0.018006 (0.278945) | 0.515708 / 0.000490 (0.515218) | 0.004225 / 0.000200 (0.004025) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029365 / 0.037411 (-0.008046) | 0.111142 / 0.014526 (0.096616) | 0.124414 / 0.176557 (-0.052142) | 0.185227 / 0.737135 (-0.551908) | 0.129545 / 0.296338 (-0.166793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403303 / 0.215209 (0.188094) | 4.044138 / 2.077655 (1.966483) | 
1.803622 / 1.504120 (0.299502) | 1.615436 / 1.541195 (0.074242) | 1.703576 / 1.468490 (0.235086) | 0.706398 / 4.584777 (-3.878379) | 3.912995 / 3.745712 (0.167283) | 4.004575 / 5.269862 (-1.265287) | 2.101592 / 4.565676 (-2.464085) | 0.087280 / 0.424275 (-0.336995) | 0.012564 / 0.007607 (0.004957) | 0.508484 / 0.226044 (0.282440) | 5.089351 / 2.268929 (2.820422) | 2.269022 / 55.444624 (-53.175602) | 1.933375 / 6.876477 (-4.943102) | 2.136783 / 2.142072 (-0.005289) | 0.862624 / 4.805227 (-3.942603) | 0.172107 / 6.500664 (-6.328557) | 0.066694 / 0.075469 (-0.008775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172513 / 1.841788 (-0.669275) | 15.877519 / 8.074308 (7.803211) | 14.687476 / 10.191392 (4.496084) | 0.189392 / 0.680424 (-0.491032) | 0.017334 / 0.534201 (-0.516866) | 0.420201 / 0.579283 (-0.159082) | 0.418502 / 0.434364 (-0.015862) | 0.489130 / 0.540337 (-0.051207) | 0.580678 / 1.386936 (-0.806258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007942 / 0.011353 (-0.003411) | 0.005312 / 0.011008 (-0.005696) | 0.074684 / 0.038508 (0.036176) | 0.035952 / 0.023109 (0.012843) | 0.349672 / 0.275898 (0.073774) | 0.377157 / 0.323480 (0.053678) | 0.006399 / 0.007986 (-0.001586) | 0.005769 / 0.004328 (0.001441) | 0.074283 / 0.004250 (0.070032) | 0.053217 / 0.037052 (0.016165) | 0.342545 / 0.258489 (0.084056) | 0.383663 / 0.293841 (0.089822) | 0.037234 / 0.128546 (-0.091312) | 0.012349 / 0.075646 (-0.063298) | 0.086522 / 0.419271 (-0.332749) | 0.049888 / 0.043533 (0.006355) | 0.337686 / 0.255139 (0.082547) | 0.361564 / 0.283200 (0.078365) | 0.104902 / 0.141683 (-0.036781) | 1.478259 / 1.452155 (0.026104) | 1.576376 / 1.492716 (0.083660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.339760 / 0.018006 (0.321753) | 0.530946 / 0.000490 (0.530456) | 0.000474 / 0.000200 (0.000274) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029685 / 0.037411 (-0.007726) | 0.109409 / 0.014526 (0.094883) | 0.125579 / 0.176557 (-0.050978) | 0.175378 / 0.737135 (-0.561757) | 0.130672 / 0.296338 (-0.165667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428456 / 0.215209 (0.213247) | 4.238731 / 2.077655 (2.161077) | 2.046703 / 1.504120 (0.542583) | 1.850701 / 1.541195 (0.309506) | 1.909290 / 1.468490 (0.440800) | 0.714314 / 4.584777 (-3.870463) | 3.816056 / 3.745712 (0.070344) | 2.118567 / 5.269862 (-3.151295) | 1.348017 / 4.565676 (-3.217659) | 0.087140 / 0.424275 (-0.337135) | 0.012546 / 0.007607 (0.004938) | 0.538041 / 0.226044 (0.311997) | 5.381822 / 2.268929 (3.112893) | 2.525685 / 55.444624 (-52.918939) | 2.178659 / 6.876477 (-4.697817) | 2.381054 / 2.142072 (0.238981) | 0.844404 / 4.805227 (-3.960823) | 0.171802 / 6.500664 (-6.328862) | 0.065630 / 0.075469 (-0.009839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262187 / 1.841788 (-0.579600) | 16.197668 / 8.074308 (8.123360) | 15.148636 / 10.191392 (4.957244) | 0.152601 / 0.680424 (-0.527823) | 0.020238 / 0.534201 (-0.513963) | 0.420141 / 0.579283 (-0.159142) | 0.416295 / 0.434364 (-0.018068) | 0.487051 / 0.540337 (-0.053286) | 0.581942 / 1.386936 (-0.804994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9615e5af75b190c4e7b66792f9ba444f352765a0 \"CML watermark\")\n" ]
"2023-04-13T07:17:02"
"2023-04-13T09:48:10"
"2023-04-13T09:40:50"
MEMBER
null
Fix warnings in our CI tests.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5741/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5741", "html_url": "https://github.com/huggingface/datasets/pull/5741", "diff_url": "https://github.com/huggingface/datasets/pull/5741.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5741.patch", "merged_at": "2023-04-13T09:40:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/5740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5740/comments
https://api.github.com/repos/huggingface/datasets/issues/5740/events
https://github.com/huggingface/datasets/pull/5740
1,664,132,130
PR_kwDODunzps5OHI08
5,740
Fix CI mock filesystem fixtures
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007003 / 0.011353 (-0.004350) | 0.004854 / 0.011008 (-0.006154) | 0.096982 / 0.038508 (0.058474) | 0.033218 / 0.023109 (0.010109) | 0.314088 / 0.275898 (0.038190) | 0.351315 / 0.323480 (0.027835) | 0.005679 / 0.007986 (-0.002307) | 0.005404 / 0.004328 (0.001075) | 0.071773 / 0.004250 (0.067522) | 0.044593 / 0.037052 (0.007540) | 0.323643 / 0.258489 (0.065154) | 0.357172 / 0.293841 (0.063331) | 0.036782 / 0.128546 (-0.091764) | 0.012146 / 0.075646 (-0.063501) | 0.334874 / 0.419271 (-0.084397) | 0.051475 / 0.043533 (0.007942) | 0.305949 / 0.255139 (0.050810) | 0.339326 / 0.283200 (0.056126) | 0.101509 / 0.141683 (-0.040174) | 1.458254 / 1.452155 (0.006099) | 1.535252 / 1.492716 (0.042535) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264837 / 0.018006 (0.246831) | 0.441444 / 0.000490 (0.440955) | 0.003331 / 0.000200 (0.003131) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026529 / 0.037411 (-0.010882) | 0.105924 / 0.014526 (0.091398) | 0.117191 / 0.176557 (-0.059365) | 0.176606 / 0.737135 (-0.560529) | 0.123452 / 0.296338 (-0.172887) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412351 / 0.215209 (0.197142) | 4.135468 / 2.077655 (2.057813) | 1.912820 / 1.504120 (0.408700) | 1.738993 / 1.541195 (0.197798) | 1.754228 / 1.468490 
(0.285738) | 0.692239 / 4.584777 (-3.892538) | 3.765672 / 3.745712 (0.019959) | 2.081141 / 5.269862 (-3.188720) | 1.425153 / 4.565676 (-3.140523) | 0.085055 / 0.424275 (-0.339220) | 0.011918 / 0.007607 (0.004311) | 0.517573 / 0.226044 (0.291529) | 5.179809 / 2.268929 (2.910881) | 2.471620 / 55.444624 (-52.973005) | 2.140634 / 6.876477 (-4.735843) | 2.200150 / 2.142072 (0.058077) | 0.831662 / 4.805227 (-3.973566) | 0.168828 / 6.500664 (-6.331836) | 0.062755 / 0.075469 (-0.012714) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196890 / 1.841788 (-0.644898) | 14.826423 / 8.074308 (6.752114) | 14.020782 / 10.191392 (3.829390) | 0.161275 / 0.680424 (-0.519149) | 0.017467 / 0.534201 (-0.516734) | 0.422278 / 0.579283 (-0.157005) | 0.424053 / 0.434364 (-0.010311) | 0.490768 / 0.540337 (-0.049570) | 0.584490 / 1.386936 (-0.802446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007102 / 0.011353 (-0.004250) | 0.005145 / 0.011008 (-0.005863) | 0.073823 / 0.038508 (0.035315) | 0.032947 / 0.023109 (0.009838) | 0.336978 / 0.275898 (0.061080) | 0.368961 / 0.323480 (0.045481) | 0.006052 / 0.007986 (-0.001934) | 0.003970 / 0.004328 (-0.000358) | 0.072925 / 0.004250 (0.068674) | 0.044502 / 0.037052 (0.007450) | 0.340849 / 0.258489 (0.082360) | 0.381487 / 0.293841 (0.087646) | 0.037207 / 0.128546 (-0.091339) | 0.012095 / 0.075646 (-0.063551) | 0.085206 / 0.419271 (-0.334065) | 0.056236 / 0.043533 (0.012703) | 0.334048 / 0.255139 (0.078909) | 0.360442 / 0.283200 (0.077242) | 0.104402 / 0.141683 (-0.037281) | 1.446907 / 1.452155 (-0.005248) | 1.542430 / 1.492716 (0.049713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238720 / 0.018006 (0.220714) | 0.445857 / 0.000490 (0.445367) | 0.009280 / 0.000200 (0.009080) | 0.000150 / 0.000054 (0.000095) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028414 / 0.037411 (-0.008998) | 0.110506 / 0.014526 (0.095981) | 0.124593 / 0.176557 (-0.051964) | 0.170951 / 0.737135 (-0.566184) | 0.128033 / 0.296338 (-0.168305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426206 / 0.215209 (0.210997) | 4.267289 / 2.077655 (2.189634) | 2.026880 / 1.504120 (0.522760) | 1.844052 / 1.541195 (0.302858) | 1.897697 / 1.468490 (0.429207) | 0.713545 / 4.584777 (-3.871232) | 3.815052 / 3.745712 (0.069339) | 3.217091 / 5.269862 (-2.052770) | 1.790546 / 4.565676 (-2.775130) | 0.087501 / 0.424275 (-0.336774) | 0.012136 / 0.007607 (0.004529) | 0.534495 / 0.226044 (0.308451) | 5.325913 / 2.268929 (3.056984) | 2.484309 / 55.444624 (-52.960315) | 2.149721 / 6.876477 (-4.726756) | 2.158764 / 2.142072 (0.016692) | 0.855273 / 4.805227 (-3.949954) | 0.170374 / 6.500664 (-6.330290) | 0.064053 / 0.075469 (-0.011416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253171 / 1.841788 (-0.588617) | 15.254562 / 8.074308 (7.180254) | 14.242119 / 10.191392 (4.050727) | 0.159298 / 0.680424 (-0.521126) | 0.017504 / 0.534201 (-0.516696) | 0.419710 / 0.579283 (-0.159574) | 0.417879 / 0.434364 (-0.016485) | 0.486328 / 0.540337 (-0.054009) | 0.578933 / 1.386936 (-0.808003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc38663c8e2c2b0b246791c3ed8bddbff163dd64 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008476 / 0.011353 (-0.002877) | 0.005745 / 0.011008 (-0.005263) | 0.115307 / 0.038508 (0.076799) | 0.039356 / 0.023109 (0.016247) | 0.367155 / 0.275898 (0.091257) | 0.422147 / 0.323480 (0.098667) | 0.006817 / 0.007986 (-0.001168) | 0.004652 / 0.004328 (0.000323) | 0.084045 / 0.004250 (0.079795) | 0.055483 / 0.037052 (0.018431) | 0.364249 / 0.258489 (0.105760) | 0.415975 / 0.293841 (0.122134) | 0.041322 / 0.128546 (-0.087224) | 0.014178 / 0.075646 (-0.061469) | 0.392658 / 0.419271 (-0.026614) | 0.060156 / 0.043533 (0.016623) | 0.373938 / 0.255139 (0.118799) | 0.397494 / 0.283200 (0.114294) | 0.113811 / 0.141683 (-0.027872) | 1.688581 / 1.452155 (0.236427) | 1.790374 / 1.492716 (0.297658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222203 / 0.018006 (0.204196) | 0.471109 / 0.000490 (0.470619) | 0.007071 / 0.000200 (0.006871) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032112 / 0.037411 (-0.005299) | 0.118726 / 0.014526 (0.104200) | 0.134918 / 0.176557 (-0.041639) | 0.207766 / 0.737135 (-0.529369) | 0.139756 / 0.296338 (-0.156582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479858 / 0.215209 (0.264649) | 4.798428 / 2.077655 (2.720773) | 2.221573 / 1.504120 (0.717453) | 1.964956 / 1.541195 (0.423761) | 2.021763 / 1.468490 (0.553273) | 0.820401 / 4.584777 (-3.764376) | 4.533887 / 3.745712 (0.788175) | 4.121332 / 5.269862 (-1.148529) | 2.195807 / 4.565676 (-2.369869) | 0.103133 / 0.424275 (-0.321142) | 0.014620 / 0.007607 (0.007013) | 0.605012 / 0.226044 (0.378967) | 5.966623 / 2.268929 (3.697694) | 2.844118 / 55.444624 (-52.600506) | 2.463569 / 6.876477 (-4.412907) | 2.597177 / 2.142072 (0.455105) | 0.983201 / 4.805227 (-3.822026) | 0.199500 / 6.500664 (-6.301164) | 0.078387 / 0.075469 (0.002918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.401083 / 1.841788 (-0.440705) | 17.258725 / 8.074308 (9.184417) | 16.825992 / 10.191392 (6.634600) | 0.216762 / 0.680424 (-0.463662) | 0.021135 / 0.534201 (-0.513066) | 0.513688 / 0.579283 (-0.065595) | 0.488892 / 0.434364 (0.054529) | 0.566745 / 0.540337 (0.026408) | 0.688958 / 1.386936 
(-0.697978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007948 / 0.011353 (-0.003405) | 0.005981 / 0.011008 (-0.005027) | 0.084474 / 0.038508 (0.045966) | 0.037952 / 0.023109 (0.014843) | 0.383359 / 0.275898 (0.107461) | 0.409324 / 0.323480 (0.085844) | 0.006641 / 0.007986 (-0.001344) | 0.004785 / 0.004328 (0.000456) | 0.083214 / 0.004250 (0.078964) | 0.053177 / 0.037052 (0.016125) | 0.393147 / 0.258489 (0.134658) | 0.438496 / 0.293841 (0.144655) | 0.042090 / 0.128546 (-0.086456) | 0.013373 / 0.075646 (-0.062273) | 0.097585 / 0.419271 (-0.321686) | 0.056359 / 0.043533 (0.012826) | 0.378113 / 0.255139 (0.122974) | 0.403874 / 0.283200 (0.120674) | 0.123503 / 0.141683 (-0.018180) | 1.639557 / 1.452155 (0.187403) | 1.759787 / 1.492716 (0.267071) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242534 / 0.018006 (0.224528) | 0.459040 / 0.000490 (0.458550) | 0.000454 / 0.000200 (0.000254) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031747 / 0.037411 (-0.005664) | 0.125823 / 0.014526 (0.111297) | 0.138985 / 0.176557 (-0.037571) | 0.194371 / 0.737135 (-0.542764) | 0.148905 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508201 / 0.215209 (0.292992) | 5.007519 / 2.077655 (2.929865) | 2.412956 / 1.504120 (0.908836) | 2.143378 / 1.541195 (0.602183) | 2.192966 / 1.468490 (0.724476) | 0.828497 / 
4.584777 (-3.756280) | 4.496457 / 3.745712 (0.750745) | 2.397546 / 5.269862 (-2.872315) | 1.522889 / 4.565676 (-3.042787) | 0.099904 / 0.424275 (-0.324371) | 0.014561 / 0.007607 (0.006954) | 0.627417 / 0.226044 (0.401373) | 6.296441 / 2.268929 (4.027512) | 2.962858 / 55.444624 (-52.481767) | 2.543083 / 6.876477 (-4.333394) | 2.711884 / 2.142072 (0.569811) | 0.997969 / 4.805227 (-3.807259) | 0.200283 / 6.500664 (-6.300382) | 0.075934 / 0.075469 (0.000465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541707 / 1.841788 (-0.300081) | 17.791559 / 8.074308 (9.717251) | 16.782877 / 10.191392 (6.591485) | 0.171954 / 0.680424 (-0.508470) | 0.020506 / 0.534201 (-0.513695) | 0.504189 / 0.579283 (-0.075094) | 0.501655 / 0.434364 (0.067291) | 0.583120 / 0.540337 (0.042782) | 0.694931 / 1.386936 (-0.692005) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53355f308f4ffb9b4071f5d420b5c6767799ef1c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007613 / 0.011353 (-0.003740) | 0.005057 / 0.011008 (-0.005951) | 0.099147 / 0.038508 (0.060639) | 0.035358 / 0.023109 (0.012249) | 0.303442 / 0.275898 (0.027544) | 0.336898 / 0.323480 (0.013418) | 0.006216 / 0.007986 (-0.001770) | 0.004085 / 0.004328 (-0.000244) | 0.074567 / 0.004250 (0.070317) | 0.050917 / 0.037052 (0.013865) | 0.301786 / 0.258489 (0.043297) | 0.341362 / 0.293841 (0.047521) | 0.037019 / 0.128546 (-0.091528) | 0.011977 / 0.075646 (-0.063669) | 0.334688 / 0.419271 (-0.084583) | 0.051326 / 0.043533 (0.007793) | 0.299878 / 0.255139 (0.044739) | 0.325571 / 0.283200 (0.042371) | 0.110744 / 0.141683 (-0.030939) | 1.480898 / 1.452155 (0.028743) | 1.566917 / 1.492716 (0.074201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253249 / 0.018006 (0.235242) | 0.558576 / 0.000490 (0.558086) | 0.003838 / 0.000200 (0.003638) | 0.000085 / 0.000054 
(0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028731 / 0.037411 (-0.008681) | 0.110643 / 0.014526 (0.096117) | 0.119560 / 0.176557 (-0.056996) | 0.178010 / 0.737135 (-0.559126) | 0.130286 / 0.296338 (-0.166053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400190 / 0.215209 (0.184981) | 3.999326 / 2.077655 (1.921672) | 1.797332 / 1.504120 (0.293212) | 1.610808 / 1.541195 (0.069613) | 1.679949 / 1.468490 (0.211459) | 0.696539 / 4.584777 (-3.888238) | 3.784766 / 3.745712 (0.039054) | 2.205008 / 5.269862 (-3.064854) | 1.501697 / 4.565676 (-3.063979) | 0.085553 / 0.424275 (-0.338723) | 0.012223 / 0.007607 (0.004616) | 0.494858 / 0.226044 (0.268813) | 4.968535 / 2.268929 (2.699606) | 2.258759 / 55.444624 (-53.185865) | 1.926236 / 6.876477 (-4.950241) | 2.072155 / 2.142072 (-0.069917) | 0.838354 / 4.805227 (-3.966873) | 0.168810 / 6.500664 (-6.331854) | 0.064347 / 0.075469 (-0.011122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.166696 / 1.841788 (-0.675091) | 14.721287 / 8.074308 (6.646979) | 14.319272 / 10.191392 (4.127880) | 0.144534 / 0.680424 (-0.535890) | 0.017502 / 0.534201 (-0.516699) | 0.422682 / 0.579283 (-0.156601) | 0.424426 / 0.434364 (-0.009938) | 0.493561 / 0.540337 (-0.046777) | 0.586765 / 1.386936 (-0.800171) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003589) | 0.005516 / 0.011008 (-0.005492) | 0.074745 / 0.038508 (0.036237) | 0.034364 / 0.023109 (0.011255) | 0.344318 / 0.275898 (0.068420) | 0.374779 / 0.323480 (0.051299) | 0.005904 / 0.007986 (-0.002082) | 0.004323 / 0.004328 (-0.000005) | 0.073191 / 0.004250 (0.068941) | 0.051549 / 0.037052 (0.014496) | 0.341792 / 0.258489 (0.083303) | 0.387576 / 0.293841 (0.093735) | 0.037483 / 0.128546 (-0.091063) | 0.012410 / 0.075646 (-0.063237) | 0.086480 / 0.419271 (-0.332791) | 0.050035 / 0.043533 (0.006502) | 0.335475 / 0.255139 (0.080336) | 0.361436 / 0.283200 (0.078236) | 0.106890 / 0.141683 (-0.034792) | 1.464032 / 1.452155 (0.011877) | 1.563490 / 1.492716 (0.070774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268765 / 0.018006 (0.250758) | 0.563811 / 0.000490 (0.563321) | 0.004904 / 0.000200 (0.004704) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029885 / 0.037411 (-0.007526) | 0.113885 / 0.014526 (0.099359) | 0.124283 / 0.176557 (-0.052274) | 0.173619 / 0.737135 (-0.563517) | 0.131781 / 0.296338 (-0.164557) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420296 / 0.215209 (0.205087) | 4.167656 / 2.077655 (2.090001) | 1.982356 / 1.504120 (0.478237) | 1.792181 / 1.541195 (0.250986) | 1.871459 / 1.468490 (0.402969) | 0.707066 / 4.584777 (-3.877711) | 3.835922 / 3.745712 (0.090210) | 3.506796 / 5.269862 (-1.763066) | 1.857172 / 4.565676 (-2.708505) | 0.086219 / 0.424275 (-0.338056) | 0.012404 / 0.007607 (0.004796) | 0.512393 / 0.226044 (0.286348) | 5.111623 / 2.268929 (2.842695) | 2.493523 / 55.444624 (-52.951101) | 2.188220 / 6.876477 (-4.688257) | 2.319096 / 2.142072 (0.177024) | 0.844084 / 4.805227 (-3.961144) | 0.171130 / 6.500664 (-6.329534) | 0.065913 / 0.075469 (-0.009556) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284768 / 1.841788 (-0.557020) | 15.334610 / 8.074308 (7.260301) | 14.724436 / 10.191392 (4.533044) | 0.188425 / 0.680424 (-0.491999) | 0.017984 / 0.534201 (-0.516217) | 0.428150 / 0.579283 (-0.151133) | 0.429013 / 0.434364 (-0.005351) | 0.500818 / 0.540337 (-0.039519) | 0.592879 / 1.386936 (-0.794057) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ee68da958c2fab3a26d9f0efb1e207ecbcf7ce15 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006870 / 0.011353 (-0.004483) | 0.004702 / 0.011008 (-0.006306) | 0.099258 / 0.038508 (0.060750) | 0.029008 / 0.023109 (0.005899) | 0.330599 / 0.275898 (0.054701) | 0.361163 / 0.323480 (0.037683) | 0.005020 / 0.007986 (-0.002965) | 0.003474 / 0.004328 (-0.000855) | 0.075902 / 0.004250 (0.071651) | 0.037462 / 0.037052 (0.000410) | 0.336213 / 0.258489 (0.077724) | 0.370645 / 0.293841 (0.076804) | 0.032435 / 0.128546 (-0.096111) | 0.011686 / 0.075646 (-0.063960) | 0.326040 / 0.419271 (-0.093232) | 0.043750 / 0.043533 (0.000217) | 0.332629 / 0.255139 (0.077490) | 0.353302 / 0.283200 (0.070102) | 0.090421 / 0.141683 (-0.051262) | 1.470097 / 1.452155 (0.017942) | 1.544908 / 1.492716 (0.052191) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213418 / 0.018006 (0.195411) | 0.434808 / 0.000490 (0.434319) | 0.005949 / 0.000200 (0.005749) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023085 / 0.037411 (-0.014327) | 0.098222 / 0.014526 (0.083696) | 0.104543 / 0.176557 (-0.072013) | 0.165423 / 0.737135 (-0.571713) | 0.108732 / 0.296338 (-0.187606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433933 / 0.215209 
(0.218724) | 4.334358 / 2.077655 (2.256704) | 2.013984 / 1.504120 (0.509864) | 1.862981 / 1.541195 (0.321787) | 1.873936 / 1.468490 (0.405446) | 0.699857 / 4.584777 (-3.884920) | 3.417815 / 3.745712 (-0.327897) | 1.946403 / 5.269862 (-3.323459) | 1.308683 / 4.565676 (-3.256994) | 0.083297 / 0.424275 (-0.340978) | 0.012610 / 0.007607 (0.005003) | 0.540877 / 0.226044 (0.314832) | 5.408293 / 2.268929 (3.139365) | 2.529574 / 55.444624 (-52.915050) | 2.201047 / 6.876477 (-4.675429) | 2.392966 / 2.142072 (0.250894) | 0.812719 / 4.805227 (-3.992509) | 0.154013 / 6.500664 (-6.346651) | 0.067614 / 0.075469 (-0.007855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228150 / 1.841788 (-0.613638) | 14.037090 / 8.074308 (5.962782) | 14.259416 / 10.191392 (4.068024) | 0.155554 / 0.680424 (-0.524870) | 0.016521 / 0.534201 (-0.517680) | 0.379615 / 0.579283 (-0.199668) | 0.421352 / 0.434364 (-0.013012) | 0.446512 / 0.540337 (-0.093825) | 0.531802 / 1.386936 (-0.855134) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004432 / 0.011008 (-0.006577) | 0.076662 / 0.038508 (0.038154) | 0.027674 / 0.023109 (0.004565) | 0.341667 / 0.275898 (0.065769) | 0.376493 / 0.323480 (0.053014) | 0.005076 / 0.007986 (-0.002910) | 0.004655 / 0.004328 (0.000326) | 0.075698 / 0.004250 (0.071448) | 0.036905 / 0.037052 (-0.000147) | 0.342394 / 0.258489 (0.083905) | 0.383330 / 0.293841 (0.089489) | 0.031729 / 0.128546 (-0.096817) | 0.011582 / 0.075646 (-0.064064) | 0.085721 / 0.419271 (-0.333551) | 0.042012 / 0.043533 (-0.001521) | 0.342063 / 0.255139 (0.086924) | 0.367335 / 0.283200 (0.084136) | 0.089641 / 0.141683 (-0.052042) | 1.520353 / 1.452155 (0.068198) | 1.643653 / 1.492716 (0.150937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178995 / 0.018006 (0.160989) | 0.436544 / 0.000490 (0.436055) | 0.002311 / 0.000200 (0.002111) | 0.000081 / 0.000054 
(0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025386 / 0.037411 (-0.012026) | 0.099717 / 0.014526 (0.085192) | 0.110809 / 0.176557 (-0.065747) | 0.162931 / 0.737135 (-0.574204) | 0.110430 / 0.296338 (-0.185909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438592 / 0.215209 (0.223382) | 4.372560 / 2.077655 (2.294905) | 2.069686 / 1.504120 (0.565567) | 1.860576 / 1.541195 (0.319382) | 1.898161 / 1.468490 (0.429671) | 0.698353 / 4.584777 (-3.886424) | 3.462440 / 3.745712 (-0.283272) | 1.868602 / 5.269862 (-3.401260) | 1.160498 / 4.565676 (-3.405179) | 0.082869 / 0.424275 (-0.341406) | 0.012690 / 0.007607 (0.005083) | 0.533278 / 0.226044 (0.307233) | 5.386214 / 2.268929 (3.117285) | 2.519243 / 55.444624 (-52.925382) | 2.171109 / 6.876477 (-4.705368) | 2.272617 / 2.142072 (0.130544) | 0.805843 / 4.805227 (-3.999384) | 0.152275 / 6.500664 (-6.348389) | 0.068038 / 0.075469 (-0.007431) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291967 / 1.841788 (-0.549821) | 14.386474 / 8.074308 (6.312166) | 14.180693 / 10.191392 (3.989301) | 0.131714 / 0.680424 (-0.548710) | 0.016596 / 0.534201 (-0.517605) | 0.384293 / 0.579283 (-0.194990) | 0.404051 / 0.434364 (-0.030313) | 0.452167 / 0.540337 (-0.088170) | 0.542718 / 1.386936 (-0.844218) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f9c770bb1a43fa7fe390286d7535266d3964d067 \"CML watermark\")\n" ]
"2023-04-12T08:52:35"
"2023-04-13T11:01:24"
"2023-04-13T10:54:13"
MEMBER
null
This PR fixes the fixtures of our CI mock filesystems. Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the previously added, still-present "mock" filesystem. That meant the mock filesystem fixture was not working properly, because the previously added "mock" filesystem should have been deleted by the fixture. This PR fixes the mock filesystem fixtures so that the "mock" filesystem is properly deleted from the inner `fsspec` registry. Tests were added to check the correct behavior of the mock filesystem fixtures. Related to: - #5733
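As an illustration of the fixture pattern described above — a minimal sketch only, not the actual fixture from this PR — the idea is to register the "mock" protocol during setup and remove it from `fsspec`'s registry during teardown, so that later registrations no longer need `clobber=True`. The `MockFileSystem` class and the fixture name are hypothetical, and popping from `fsspec.registry._registry` relies on an internal attribute that may differ across `fsspec` versions.

```python
import fsspec
import pytest


class MockFileSystem(fsspec.AbstractFileSystem):
    # Hypothetical filesystem class used only to illustrate the fixture pattern.
    protocol = "mock"


@pytest.fixture
def mock_fsspec():
    # Setup: register the "mock" protocol for the duration of a single test.
    fsspec.register_implementation(MockFileSystem.protocol, MockFileSystem)
    yield
    # Teardown: remove the protocol from fsspec's (internal) registry so the
    # next test can register it again without passing clobber=True.
    fsspec.registry._registry.pop(MockFileSystem.protocol, None)
```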
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5740/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5740", "html_url": "https://github.com/huggingface/datasets/pull/5740", "diff_url": "https://github.com/huggingface/datasets/pull/5740.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5740.patch", "merged_at": "2023-04-13T10:54:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/5739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5739/comments
https://api.github.com/repos/huggingface/datasets/issues/5739/events
https://github.com/huggingface/datasets/issues/5739
1,663,762,901
I_kwDODunzps5jKwHV
5,739
weird result during dataset split when data path starts with `/data`
{ "login": "ericxsun", "id": 1772912, "node_id": "MDQ6VXNlcjE3NzI5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ericxsun", "html_url": "https://github.com/ericxsun", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "repos_url": "https://api.github.com/users/ericxsun/repos", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Same problem.", "hi! \r\nI think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. \r\n@ericxsun Do you want to open a PR to fix the regex? As you already found the solution :) ", "> hi! I think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. @ericxsun Do you want to open a PR to fix the regex? As you already found the solution :)\r\n\r\nSure, please see https://github.com/huggingface/datasets/pull/5748 @polinaeterna ", "I think `string_to_dict` is ok, and that the issue is that it gets `'/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'` as input instead of `'data/test-00000-of-00001-9c49eeff30aacaa8.parquet'`. The path should be relative to the directory being loaded by `load_dataset`" ]
"2023-04-12T04:51:35"
"2023-04-21T14:20:59"
null
NONE
null
### Describe the bug The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158 will cause a weird result during dataset split when data path starts with `/data` ### Steps to reproduce the bug 1. clone dataset into local path ``` cd /data/train/raw/ git lfs clone https://huggingface.co/datasets/deepmind/code_contests.git ls /data/train/raw/code_contests # README.md data dataset_infos.json ls /data/train/raw/code_contests/data # test-00000-of-00001-9c49eeff30aacaa8.parquet # train-[0-9]+-of-[0-9]+-xx.parquet # valid-00000-of-00001-5e672c5751f060d3.parquet ``` 2. loading data from local ``` from datasets import load_dataset dataset = load_dataset('/data/train/raw/code_contests') FileNotFoundError: Unable to resolve any data file that matches '['data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*']' at /data/train/raw/code_contests with any supported extension ``` weird path `data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*` While dive deep into `LocalDatasetModuleFactoryWithoutScript` defined in [load.py](https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/load.py#L627) and _get_data_files_patterns https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/data_files.py#L228. I found the weird behavior caused by `string_to_dict` 3. check `string_to_dict` ``` p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*' string_to_dict(p, split_pattern) # {'split': 'train/raw/code_contests/data/test'} p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' string_to_dict(p, split_pattern) {'split': 'test'} ``` go deep into string_to_dict https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158. 4. test the regex: <img width="680" alt="image" src="https://user-images.githubusercontent.com/1772912/231351129-75179f01-fb9f-4f12-8fa9-0dfcc3d5f3bd.png"> <img width="679" alt="image" src="https://user-images.githubusercontent.com/1772912/231351025-009f3d83-2cf3-4e15-9ed4-6b9663dcb2ee.png"> ### Expected behavior statement in `steps to reproduce the bug` 3. check `string_to_dict` ``` p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*' string_to_dict(p, split_pattern) # {'split': 'train/raw/code_contests/data/test'} p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet' string_to_dict(p, split_pattern) {'split': 'test'} ``` ### Environment info - linux(debian) - python 3.7 - datasets 2.8.0
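To make the reported behavior easier to reproduce outside the library, here is a minimal, self-contained sketch that mimics how a `{split}` placeholder pattern can be turned into a regex and searched against an absolute path. The conversion below is a simplified stand-in for `string_to_dict`, not the actual implementation, so treat it only as an illustration of why a path starting with `/data` trips the pattern.

```python
import re

# Simplified stand-in for datasets.utils.py_utils.string_to_dict:
# replace "{split}" with a named capture group and search the path.
def string_to_dict_sketch(string, pattern):
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>.+)", pattern)
    match = re.search(regex, string)
    return match.groupdict() if match else None

split_pattern = "data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"

# Path starting with "/data": the literal "data/" prefix of the pattern
# matches at the very beginning of the path, so the capture group swallows
# most of the directory structure.
p1 = "/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet"
print(string_to_dict_sketch(p1, split_pattern))
# {'split': 'train/raw/code_contests/data/test'}

# Same path under "/data2": "data/" only matches inside the dataset folder,
# so the split name is extracted as expected.
p2 = "/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet"
print(string_to_dict_sketch(p2, split_pattern))
# {'split': 'test'}
```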
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5739/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5738/comments
https://api.github.com/repos/huggingface/datasets/issues/5738/events
https://github.com/huggingface/datasets/issues/5738
1,663,477,690
I_kwDODunzps5jJqe6
5,738
load_dataset("text","dataset.txt") loads the wrong dataset!
{ "login": "Tylersuard", "id": 41713505, "node_id": "MDQ6VXNlcjQxNzEzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tylersuard", "html_url": "https://github.com/Tylersuard", "followers_url": "https://api.github.com/users/Tylersuard/followers", "following_url": "https://api.github.com/users/Tylersuard/following{/other_user}", "gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions", "organizations_url": "https://api.github.com/users/Tylersuard/orgs", "repos_url": "https://api.github.com/users/Tylersuard/repos", "events_url": "https://api.github.com/users/Tylersuard/events{/privacy}", "received_events_url": "https://api.github.com/users/Tylersuard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You need to provide a text file as `data_files`, not as a configuration:\r\n\r\n```python\r\nmy_dataset = load_dataset(\"text\", data_files=\"TextFile.txt\")\r\n```\r\n\r\nOtherwise, since `data_files` is `None`, it picks up Colab's sample datasets from the `content` dir." ]
"2023-04-12T01:07:46"
"2023-04-19T12:08:27"
"2023-04-19T12:08:27"
NONE
null
### Describe the bug I am trying to load my own custom text dataset using the load_dataset function. My dataset is a bunch of ordered text, think along the lines of Shakespeare plays. However, after I load the dataset and inspect it, the dataset is a table with a bunch of latitude and longitude values! What in the world?? ### Steps to reproduce the bug my_dataset = load_dataset("text","TextFile.txt") my_dataset ### Expected behavior I expected the dataset to contain the actual data from the text document that I used. ### Environment info Google Colab
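For reference, a minimal sketch of how the generic text loader is typically invoked: the local file goes through `data_files`, not the second positional argument (which is a configuration name). The file name below is just a placeholder.

```python
from datasets import load_dataset

# Incorrect: the second positional argument is treated as a configuration
# name, not as a data file, so no custom file is actually loaded here.
# wrong = load_dataset("text", "TextFile.txt")

# Correct: point the generic "text" builder at the local file explicitly.
ds = load_dataset("text", data_files="TextFile.txt")
print(ds["train"][0])  # first line of TextFile.txt
```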
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5738/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5737/comments
https://api.github.com/repos/huggingface/datasets/issues/5737/events
https://github.com/huggingface/datasets/issues/5737
1,662,919,811
I_kwDODunzps5jHiSD
5,737
ClassLabel Error
{ "login": "mrcaelumn", "id": 10896776, "node_id": "MDQ6VXNlcjEwODk2Nzc2", "avatar_url": "https://avatars.githubusercontent.com/u/10896776?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrcaelumn", "html_url": "https://github.com/mrcaelumn", "followers_url": "https://api.github.com/users/mrcaelumn/followers", "following_url": "https://api.github.com/users/mrcaelumn/following{/other_user}", "gists_url": "https://api.github.com/users/mrcaelumn/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrcaelumn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrcaelumn/subscriptions", "organizations_url": "https://api.github.com/users/mrcaelumn/orgs", "repos_url": "https://api.github.com/users/mrcaelumn/repos", "events_url": "https://api.github.com/users/mrcaelumn/events{/privacy}", "received_events_url": "https://api.github.com/users/mrcaelumn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassLabel(names=['label_1', 'label_2', 'label_3'], id=None)}\r\n```", "thank you @stevhliu, its worked. " ]
"2023-04-11T17:14:13"
"2023-04-13T16:49:57"
"2023-04-13T16:49:57"
NONE
null
### Describe the bug I am still getting the error "call() takes 1 positional argument but 2 were given" even after ensuring that the value being passed to the label object is a single value and that the ClassLabel object has been created with the correct number of label classes ### Steps to reproduce the bug from datasets import ClassLabel, Dataset 1. Create the ClassLabel object with 3 label values and their corresponding names label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"]) 2. Define a dictionary with text and label fields data = { 'text': ['text_1', 'text_2', 'text_3'], 'label': [1, 2, 3], } 3. Create a Hugging Face dataset from the dictionary dataset = Dataset.from_dict(data) print(dataset.features) 4. Map the label values to their corresponding label names using the label object dataset = dataset.map(lambda example: {'text': example['text'], 'label': label_test(example['label'])}) 5. Print the resulting dataset print(dataset) ### Expected behavior I expect my label type to be ClassLabel instead of int. ### Environment info Python 3.9, Google Colab
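The fix suggested in the comments can be written out as a small end-to-end sketch; note that `ClassLabel` indices are 0-based, so the integer labels below use 0–2 rather than the 1–3 of the original snippet.

```python
from datasets import ClassLabel, Dataset

data = {
    "text": ["text_1", "text_2", "text_3"],
    "label": [0, 1, 2],  # 0-based indices for a 3-class ClassLabel
}
dataset = Dataset.from_dict(data)

# Cast the plain int64 column to a ClassLabel feature instead of calling
# the ClassLabel object on each value inside map().
dataset = dataset.cast_column("label", ClassLabel(names=["label_1", "label_2", "label_3"]))

print(dataset.features["label"])
# ClassLabel(names=['label_1', 'label_2', 'label_3'], id=None)
print(dataset.features["label"].int2str(dataset[0]["label"]))  # 'label_1'
```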
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5737/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5736/comments
https://api.github.com/repos/huggingface/datasets/issues/5736/events
https://github.com/huggingface/datasets/issues/5736
1,662,286,061
I_kwDODunzps5jFHjt
5,736
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
{ "login": "rcasero", "id": 1219084, "node_id": "MDQ6VXNlcjEyMTkwODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcasero", "html_url": "https://github.com/rcasero", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "organizations_url": "https://api.github.com/users/rcasero/orgs", "repos_url": "https://api.github.com/users/rcasero/repos", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "received_events_url": "https://api.github.com/users/rcasero/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?", "I have the same error with `datasets==2.14.5` and `pyarrow==13.0.0`. Python 3.10.13", "I have same error. Any workaround?" ]
"2023-04-11T11:29:15"
"2023-11-30T07:16:58"
null
NONE
null
### Describe the bug Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run. ### Steps to reproduce the bug I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1. 1. Set up a script `my_dataset.py` to generate and load an offline dataset. 2. Load it with ```python ds = datasets.load_dataset(path=/path/to/my_dataset.py, name='toy', data_dir=/path/to/my_dataset.py, cache_dir=cache_dir, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, ) ``` It loads fine ``` Dataset my_dataset downloaded and prepared to /path/to/cache/toy-..e05e/1.0.0/...5b4c. Subsequent calls will reuse this data. ``` 3. Try to load it again with the same snippet and the splits are generated, but at the end of the loading process it raises the error ``` 2023-04-11 12:10:19,965: DEBUG: open file: /path/to/cache/toy-..e05e/1.0.0/...5b4c.incomplete/dataset_info.json Traceback (most recent call last): File "<string>", line 2, in <module> File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset builder_instance.download_and_prepare( File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 852, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/path/to/conda/environment/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 826, in incomplete_dir shutil.rmtree(dirname) File "/path/to/conda/environment/lib/python3.10/shutil.py", line 730, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/path/to/conda/environment/lib/python3.10/shutil.py", line 728, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/path/to/cache/toy-..e05e/1.0.0/...5b4c' ``` ### Expected behavior Regenerate the dataset from scratch and reload it. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5736/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5736/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5735/comments
https://api.github.com/repos/huggingface/datasets/issues/5735/events
https://github.com/huggingface/datasets/pull/5735
1,662,150,903
PR_kwDODunzps5OAY3A
5,735
Implement sharding on merged iterable datasets
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi ! What if one of the sub-iterables only has one shard ? In that case I don't think we'd end up with a correctly interleaved dataset, since only rank 0 would yield examples from this sub-iterable", "Hi ! \r\nI just tested this out with the code below and it seems to be ok. Both datasets are alternating and we get all the examples with no duplicates.\r\n\r\nOn thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).\r\n\r\n ```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=1)\r\n\r\n ds_merged = interleave_datasets([ds1, ds2], stopping_strategy=\"all_exhausted\")\r\n\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v'}]\r\n1 [{'input': 'test: Works with RTL and N'}]\r\n2 [{'input': \"train: Great It's not fully\"}]\r\n3 [{'input': 'test: Works with RTL SDR W'}]\r\n4 [{'input': 'train: Works on a Nexus 6p '}]\r\n5 [{'input': 'test: Awsome App! Easy to '}]\r\n6 [{'input': 'train: The bandwidth seemed'}]\r\n7 [{'input': \"test: I'll forgo the refun\"}]\r\n8 [{'input': 'train: Works well with my H'}]\r\n9 [{'input': 'test: looks like a great p'}]\r\n```", "<s> Could you try with `num_workers>1` ? </s>\r\n\r\nedit: Oh I see\r\n\r\n> On thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).", "Great ! It's ok to have the max amount of workers is equal to the lowest amount of shard :)\r\n\r\nSo in the case of `num_workers>min(n_shards_per_dataset)` maybe some workers should turn off, and a warning can probably be shown. 
This is already the case if you use a single dataset with a single shard and `num_workers>1`.\r\n\r\n\r\nRight now it seems to raise an error:\r\n\r\n```python\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 979, in __iter__\r\n yield from self._iter_pytorch(ex_iterable)\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 912, in _iter_pytorch\r\n for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in shard_data_sources\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in <listcomp>\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 125, in shard_data_sources\r\n requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/utils/sharding.py\", line 76, in _merge_gen_kwargs\r\n for key in gen_kwargs_list[0]\r\nIndexError: list index out of range\r\n```", "Good point. I have fixed the n_shards property of merged iterable datasets so that this warning is raised properly", "Hey @lhoestq, what do you think of the last modifications ? ", "Hello! No problem :)\r\n\r\n- About HorizontallyConcatenatedMultiSourcesExamplesIterable, I've haven't been able to create a bug with sharding. So either I missed something or it's working somehow:\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets, concatenate_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].rename_columns({\"input\": \"input2\"})\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=3)\r\n\r\n ds_merged = concatenate_datasets([ds1, ds2], axis=1)\r\n\r\n #n_shards is always 1 for HorizontallyConcatenatedMultiSourcesExamplesIterable\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v', 'input2': 'test: Works with RTL and N'}]\r\n1 [{'input': \"train: Great It's not fully\", 'input2': 'test: Works with RTL SDR W'}]\r\n2 [{'input': 'train: Works on a Nexus 6p ', 'input2': 'test: Awsome App! Easy to '}]\r\n3 [{'input': 'train: The bandwidth seemed', 'input2': \"test: I'll forgo the refun\"}]\r\n4 [{'input': 'train: Works well with my H', 'input2': 'test: looks like a great p'}]\r\n```\r\n\r\n- I've added a test but I'm not completely happy with it. 
My issue is that multiprocessing makes interleaving not completely deterministic as samples are yielded whenever ready by each process, if I'm correct.\r\nAs a result I opted to check for the amount of samples yielded and make that they are all unique, which should be equivalent.\r\nBut now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nWhat are your thoughts about this ?", "Ah indeed it works because it's set to be only 1 shard - my bad :)", "> But now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nThis looks reasonable, maybe this can be documented in the `interleave_datasets` docstring ?\r\n```\r\nNote for iterable datasets:\r\n\r\nIn a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process.\r\nTherefore the \"first_exhausted\" strategy on an sharded iterable dataset can generate less samples in total (up to 1 missing sample per subdataset per worker).\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006441 / 0.011353 (-0.004912) | 0.004551 / 0.011008 (-0.006457) | 0.099144 / 0.038508 (0.060636) | 0.028163 / 0.023109 (0.005054) | 0.386342 / 0.275898 (0.110444) | 0.398347 / 0.323480 (0.074867) | 0.004836 / 0.007986 (-0.003150) | 0.004724 / 0.004328 (0.000395) | 0.076277 / 0.004250 (0.072027) | 0.036305 / 0.037052 (-0.000747) | 0.377179 / 0.258489 (0.118690) | 0.410694 / 0.293841 (0.116853) | 0.030196 / 0.128546 (-0.098351) | 0.011436 / 0.075646 (-0.064211) | 0.325911 / 0.419271 (-0.093360) | 0.043709 / 0.043533 (0.000177) | 0.375801 / 0.255139 (0.120662) | 0.396511 / 0.283200 (0.113311) | 0.088346 / 0.141683 (-0.053337) | 1.483427 / 1.452155 (0.031272) | 1.553708 / 1.492716 (0.060992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190974 / 0.018006 (0.172968) | 0.451309 / 0.000490 (0.450819) | 0.004045 / 0.000200 (0.003845) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023814 / 0.037411 (-0.013597) | 0.096922 / 0.014526 (0.082396) | 0.101506 / 0.176557 (-0.075050) | 0.164694 / 0.737135 (-0.572441) | 0.106899 / 0.296338 (-0.189439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432164 / 0.215209 (0.216954) | 4.308076 / 2.077655 (2.230421) | 2.092434 / 1.504120 (0.588314) | 1.937405 / 1.541195 (0.396210) | 1.988030 / 1.468490 (0.519540) | 0.695476 / 4.584777 (-3.889301) | 3.436413 / 3.745712 (-0.309299) | 2.892954 / 5.269862 (-2.376908) | 1.519906 / 4.565676 (-3.045771) | 0.082579 / 0.424275 (-0.341696) | 0.012233 / 0.007607 (0.004626) | 0.531329 / 0.226044 (0.305284) | 5.365272 / 2.268929 (3.096344) | 2.391452 / 55.444624 (-53.053172) | 2.051116 / 6.876477 (-4.825361) | 2.140663 / 2.142072 (-0.001410) | 0.807262 / 4.805227 (-3.997966) | 0.151290 / 6.500664 (-6.349374) | 0.066137 / 0.075469 (-0.009333) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193106 / 1.841788 (-0.648682) | 13.577240 / 8.074308 (5.502932) | 14.280126 / 10.191392 (4.088734) | 0.142538 / 0.680424 (-0.537886) | 0.016641 / 0.534201 (-0.517560) | 0.386318 / 0.579283 (-0.192965) | 0.385991 / 0.434364 (-0.048373) | 0.440712 / 0.540337 (-0.099625) | 0.524189 / 1.386936 (-0.862747) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after 
write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006628 / 0.011353 (-0.004725) | 0.004664 / 0.011008 (-0.006344) | 0.077254 / 0.038508 (0.038746) | 0.028369 / 0.023109 (0.005259) | 0.343076 / 0.275898 (0.067178) | 0.376491 / 0.323480 (0.053011) | 0.005298 / 0.007986 (-0.002687) | 0.004853 / 0.004328 (0.000524) | 0.075927 / 0.004250 (0.071677) | 0.039951 / 0.037052 (0.002899) | 0.346225 / 0.258489 (0.087736) | 0.382367 / 0.293841 (0.088526) | 0.031133 / 0.128546 (-0.097413) | 0.011666 / 0.075646 (-0.063981) | 0.086383 / 0.419271 (-0.332889) | 0.042885 / 0.043533 (-0.000647) | 0.343885 / 0.255139 (0.088746) | 0.366840 / 0.283200 (0.083640) | 0.095942 / 0.141683 (-0.045741) | 1.528972 / 1.452155 (0.076817) | 1.586392 / 1.492716 (0.093676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223952 / 0.018006 (0.205946) | 0.410767 / 0.000490 (0.410277) | 0.001014 / 0.000200 (0.000814) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024210 / 0.037411 (-0.013201) | 0.100308 / 0.014526 (0.085782) | 0.106899 / 0.176557 (-0.069658) | 0.156514 / 0.737135 (-0.580621) | 0.109548 / 0.296338 (-0.186790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434763 / 0.215209 (0.219554) | 4.348485 / 2.077655 (2.270831) | 2.064255 / 1.504120 (0.560135) | 1.864394 / 1.541195 (0.323199) | 1.899732 / 1.468490 (0.431242) | 0.694147 / 4.584777 (-3.890630) | 3.357898 / 3.745712 (-0.387815) | 2.909155 / 5.269862 (-2.360707) | 1.424790 / 4.565676 (-3.140886) | 0.082597 / 0.424275 (-0.341678) | 0.012442 / 0.007607 (0.004835) | 0.538758 / 0.226044 (0.312713) | 5.390288 / 2.268929 (3.121359) | 2.532016 / 55.444624 (-52.912609) | 2.185724 / 6.876477 (-4.690753) | 2.274176 / 2.142072 (0.132104) | 0.804785 / 4.805227 (-4.000442) | 0.152649 / 6.500664 (-6.348015) | 0.067707 / 0.075469 (-0.007762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285219 / 1.841788 (-0.556568) | 13.958098 / 8.074308 (5.883790) | 14.043653 / 10.191392 (3.852261) | 0.144526 / 0.680424 (-0.535898) | 0.016813 / 0.534201 (-0.517388) | 
0.390286 / 0.579283 (-0.188997) | 0.389184 / 0.434364 (-0.045180) | 0.470810 / 0.540337 (-0.069527) | 0.562391 / 1.386936 (-0.824545) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bb172c9772858c188f85ffc9a51f8cb1da292a0 \"CML watermark\")\n" ]
"2023-04-11T10:02:25"
"2023-04-27T16:39:04"
"2023-04-27T16:32:09"
CONTRIBUTOR
null
This PR allows sharding of merged iterable datasets. Merged iterable datasets, created for instance with the `interleave_datasets` command, are comprised of multiple sub-iterables, one for each dataset that has been merged. With this PR, sharding a merged iterable results in multiple merged datasets, each comprised of sharded sub-iterables, ensuring that there is no duplication of data. As a result, it is now possible to set any number of workers in the dataloader, as long as it is less than or equal to the lowest number of shards amongst the datasets. Before, it had to be set to 0. I previously talked about this issue on the forum [here](https://discuss.huggingface.co/t/interleaving-iterable-dataset-with-num-workers-0/35801)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5735/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5735", "html_url": "https://github.com/huggingface/datasets/pull/5735", "diff_url": "https://github.com/huggingface/datasets/pull/5735.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5735.patch", "merged_at": "2023-04-27T16:32:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/5734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5734/comments
https://api.github.com/repos/huggingface/datasets/issues/5734/events
https://github.com/huggingface/datasets/issues/5734
1,662,058,028
I_kwDODunzps5jEP4s
5,734
Remove temporary pin of fsspec
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-04-11T09:04:17"
"2023-04-11T11:04:52"
"2023-04-11T11:04:52"
MEMBER
null
Once the root cause is found and fixed, remove the temporary pin introduced by: - #5731
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5734/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5733/comments
https://api.github.com/repos/huggingface/datasets/issues/5733/events
https://github.com/huggingface/datasets/pull/5733
1,662,039,191
PR_kwDODunzps5OAA04
5,733
Unpin fsspec
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006240 / 0.011353 (-0.005113) | 0.004392 / 0.011008 (-0.006616) | 0.097276 / 0.038508 (0.058768) | 0.027262 / 0.023109 (0.004153) | 0.303203 / 0.275898 (0.027305) | 0.331878 / 0.323480 (0.008398) | 0.004706 / 0.007986 (-0.003279) | 0.004428 / 0.004328 (0.000100) | 0.074666 / 0.004250 (0.070416) | 0.036154 / 0.037052 (-0.000899) | 0.302997 / 0.258489 (0.044508) | 0.340350 / 0.293841 (0.046509) | 0.031011 / 0.128546 (-0.097535) | 0.011616 / 0.075646 (-0.064031) | 0.323671 / 0.419271 (-0.095601) | 0.042062 / 0.043533 (-0.001471) | 0.311381 / 0.255139 (0.056242) | 0.324697 / 0.283200 (0.041498) | 0.084248 / 0.141683 (-0.057435) | 1.471651 / 1.452155 (0.019496) | 1.533414 / 1.492716 (0.040697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193555 / 0.018006 (0.175549) | 0.393452 / 0.000490 (0.392962) | 0.002348 / 0.000200 (0.002148) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022523 / 0.037411 (-0.014889) | 0.096552 / 0.014526 (0.082026) | 0.101746 / 0.176557 (-0.074810) | 0.163145 / 0.737135 (-0.573990) | 0.106417 / 0.296338 (-0.189921) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448589 / 0.215209 (0.233380) | 4.467803 / 2.077655 (2.390148) | 
2.178745 / 1.504120 (0.674625) | 1.983339 / 1.541195 (0.442145) | 2.056554 / 1.468490 (0.588064) | 0.697571 / 4.584777 (-3.887206) | 3.363967 / 3.745712 (-0.381745) | 1.872526 / 5.269862 (-3.397336) | 1.258245 / 4.565676 (-3.307432) | 0.082954 / 0.424275 (-0.341321) | 0.012306 / 0.007607 (0.004699) | 0.545096 / 0.226044 (0.319052) | 5.468706 / 2.268929 (3.199777) | 2.645333 / 55.444624 (-52.799292) | 2.287659 / 6.876477 (-4.588818) | 2.346768 / 2.142072 (0.204696) | 0.803730 / 4.805227 (-4.001497) | 0.151037 / 6.500664 (-6.349627) | 0.066404 / 0.075469 (-0.009065) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.192982 / 1.841788 (-0.648806) | 13.631225 / 8.074308 (5.556917) | 13.830053 / 10.191392 (3.638661) | 0.141901 / 0.680424 (-0.538523) | 0.016500 / 0.534201 (-0.517701) | 0.373268 / 0.579283 (-0.206015) | 0.380123 / 0.434364 (-0.054241) | 0.430786 / 0.540337 (-0.109551) | 0.512669 / 1.386936 (-0.874267) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006161 / 0.011353 (-0.005192) | 0.004399 / 0.011008 (-0.006609) | 0.076210 / 0.038508 (0.037702) | 0.026791 / 0.023109 (0.003681) | 0.341523 / 0.275898 (0.065625) | 0.370400 / 0.323480 (0.046920) | 0.004495 / 0.007986 (-0.003491) | 0.003204 / 0.004328 (-0.001125) | 0.075444 / 0.004250 (0.071194) | 0.035914 / 0.037052 (-0.001138) | 0.343806 / 0.258489 (0.085317) | 0.384320 / 0.293841 (0.090479) | 0.031438 / 0.128546 (-0.097109) | 0.011253 / 0.075646 (-0.064393) | 0.085364 / 0.419271 (-0.333908) | 0.041407 / 0.043533 (-0.002126) | 0.338831 / 0.255139 (0.083692) | 0.364357 / 0.283200 (0.081158) | 0.087417 / 0.141683 (-0.054266) | 1.520624 / 1.452155 (0.068470) | 1.572432 / 1.492716 (0.079716) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232403 / 0.018006 (0.214396) | 0.388187 / 0.000490 (0.387698) | 0.001158 / 0.000200 (0.000958) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024596 / 0.037411 (-0.012816) | 0.101203 / 0.014526 (0.086677) | 0.105243 / 0.176557 (-0.071314) | 0.158215 / 0.737135 (-0.578920) | 0.110277 / 0.296338 (-0.186061) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435661 / 0.215209 (0.220452) | 4.350151 / 2.077655 (2.272496) | 2.072372 / 1.504120 (0.568252) | 1.870675 / 1.541195 (0.329480) | 1.910883 / 1.468490 (0.442393) | 0.697384 / 4.584777 (-3.887393) | 3.399377 / 3.745712 (-0.346335) | 2.685008 / 5.269862 (-2.584854) | 1.476843 / 4.565676 (-3.088834) | 0.083177 / 0.424275 (-0.341098) | 0.012413 / 0.007607 (0.004806) | 0.542543 / 0.226044 (0.316498) | 5.431422 / 2.268929 (3.162494) | 2.506419 / 55.444624 (-52.938206) | 2.166342 / 6.876477 (-4.710135) | 2.164421 / 2.142072 (0.022348) | 0.800609 / 4.805227 (-4.004618) | 0.150527 / 6.500664 (-6.350137) | 0.065780 / 0.075469 (-0.009689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293409 / 1.841788 (-0.548379) | 13.814898 / 8.074308 (5.740590) | 13.940416 / 10.191392 (3.749024) | 0.149377 / 0.680424 (-0.531047) | 0.016462 / 0.534201 (-0.517739) | 0.393748 / 0.579283 (-0.185535) | 0.384327 / 0.434364 (-0.050037) | 0.489900 / 0.540337 (-0.050437) | 0.574608 / 1.386936 (-0.812328) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2607935c4e45c70c44fcb698db0363ca7ba83d4 \"CML watermark\")\n" ]
"2023-04-11T08:52:12"
"2023-04-11T11:11:45"
"2023-04-11T11:04:51"
MEMBER
null
In `fsspec` 2023.4.0, the default value of `clobber` when registering an implementation was changed from True to False. See: - https://github.com/fsspec/filesystem_spec/pull/1237 This PR recovers the previous behavior by passing `clobber=True` when registering mock implementations. This PR also removes the temporary pin introduced by: - #5731 Fix #5734.
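As an illustration of the behavior change being worked around, the sketch below registers a dummy filesystem twice; the `MockFileSystem` class is hypothetical and only stands in for the mock filesystems used in the test fixtures mentioned here.

```python
import fsspec
from fsspec import AbstractFileSystem

# Hypothetical stand-in for the mock filesystem used in the test fixtures.
class MockFileSystem(AbstractFileSystem):
    protocol = "mock"

# First registration always works.
fsspec.register_implementation("mock", MockFileSystem)

# Since fsspec 2023.4.0 the default is clobber=False, so re-registering a
# protocol that is already present may raise unless clobber=True is passed
# explicitly (which is what this PR does for the mock filesystems).
fsspec.register_implementation("mock", MockFileSystem, clobber=True)
```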
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5733/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5733", "html_url": "https://github.com/huggingface/datasets/pull/5733", "diff_url": "https://github.com/huggingface/datasets/pull/5733.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5733.patch", "merged_at": "2023-04-11T11:04:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/5732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5732/comments
https://api.github.com/repos/huggingface/datasets/issues/5732/events
https://github.com/huggingface/datasets/issues/5732
1,662,020,571
I_kwDODunzps5jEGvb
5,732
Enwik8 should support the standard split
{ "login": "lucaslingle", "id": 10287371, "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucaslingle", "html_url": "https://github.com/lucaslingle", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "repos_url": "https://api.github.com/users/lucaslingle/repos", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lucaslingle", "id": 10287371, "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucaslingle", "html_url": "https://github.com/lucaslingle", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "repos_url": "https://api.github.com/users/lucaslingle/repos", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "type": "User", "site_admin": false }
[ { "login": "lucaslingle", "id": 10287371, "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucaslingle", "html_url": "https://github.com/lucaslingle", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "repos_url": "https://api.github.com/users/lucaslingle/repos", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "type": "User", "site_admin": false } ]
null
[ "#self-assign", "The Enwik8 pipeline is not present in this codebase, and is hosted elsewhere. I have opened a PR [there](https://huggingface.co/datasets/enwik8/discussions/4) instead. " ]
"2023-04-11T08:38:53"
"2023-04-11T09:28:17"
"2023-04-11T09:28:16"
NONE
null
### Feature request The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train". The HuggingFace Datasets library should include a BuilderConfig for Enwik8 with train, validation, and test sets derived from the first 90 million bytes, next 5 million bytes, and last 5 million bytes, respectively. This Enwik8 split is standard practice in LM papers, as elaborated and motivated below. ### Motivation Enwik8 is commonly split into 90M, 5M, 5M consecutive bytes. This is done in the Transformer-XL [codebase](https://github.com/kimiyoung/transformer-xl/blob/44781ed21dbaec88b280f74d9ae2877f52b492a5/getdata.sh#L34), and is additionally mentioned in the Sparse Transformers [paper](https://arxiv.org/abs/1904.10509) and the Compressive Transformers [paper](https://arxiv.org/abs/1911.05507). This split is pretty much universal among language modeling papers. One may obtain the splits by manual wrangling, using the data yielded by the ```enwik8-raw``` BuilderConfig. However, this undermines the seamless functionality of the library: one must slice the single raw example, extract it into three tensors, and wrap each in a separate dataset. This becomes even more of a nuisance if using the current Enwik8 HuggingFace dataset as a TfdsDataSource with [SeqIO](https://github.com/google/seqio), where a pipeline of preprocessors is typically included in a SeqIO Task definition, to be applied immediately after loading the data with TFDS. ### Your contribution Supporting this functionality in HuggingFace Datasets will only require an additional BuilderConfig for Enwik8 and a few additional lines of code. I will submit a PR.
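Until such a config exists, the manual wrangling described above looks roughly like the sketch below. It assumes the `enwik8-raw` config exposes the whole file as a single example under a `text` column, and it splits on character offsets as an approximation of the usual 90M/5M/5M byte split.

```python
from datasets import Dataset, load_dataset

# Load the whole of enwik8 as one example (assumed "text" column).
raw = load_dataset("enwik8", "enwik8-raw", split="train")[0]["text"]

# Standard consecutive split: first 90M, next 5M, last 5M.
train_part = raw[:90_000_000]
valid_part = raw[90_000_000:95_000_000]
test_part = raw[95_000_000:]

splits = {
    "train": Dataset.from_dict({"text": [train_part]}),
    "validation": Dataset.from_dict({"text": [valid_part]}),
    "test": Dataset.from_dict({"text": [test_part]}),
}
print({name: len(ds[0]["text"]) for name, ds in splits.items()})
```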
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5732/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5731/comments
https://api.github.com/repos/huggingface/datasets/issues/5731/events
https://github.com/huggingface/datasets/pull/5731
1,662,012,913
PR_kwDODunzps5N_7Un
5,731
Temporarily pin fsspec
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009735 / 0.011353 (-0.001618) | 0.010410 / 0.011008 (-0.000598) | 0.134986 / 0.038508 (0.096478) | 0.038392 / 0.023109 (0.015283) | 0.414451 / 0.275898 (0.138553) | 0.447775 / 0.323480 (0.124295) | 0.007223 / 0.007986 (-0.000763) | 0.006373 / 0.004328 (0.002045) | 0.102631 / 0.004250 (0.098381) | 0.048516 / 0.037052 (0.011464) | 0.410179 / 0.258489 (0.151690) | 0.467773 / 0.293841 (0.173932) | 0.053163 / 0.128546 (-0.075384) | 0.019801 / 0.075646 (-0.055845) | 0.452708 / 0.419271 (0.033436) | 0.068691 / 0.043533 (0.025159) | 0.405482 / 0.255139 (0.150343) | 0.457669 / 0.283200 (0.174470) | 0.113464 / 0.141683 (-0.028219) | 1.918143 / 1.452155 (0.465988) | 2.033123 / 1.492716 (0.540407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274564 / 0.018006 (0.256557) | 0.608855 / 0.000490 (0.608366) | 0.006266 / 0.000200 (0.006066) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033704 / 0.037411 (-0.003708) | 0.130982 / 0.014526 (0.116456) | 0.143862 / 0.176557 (-0.032694) | 0.212622 / 0.737135 (-0.524513) | 0.148899 / 0.296338 (-0.147439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.670968 / 0.215209 (0.455759) | 6.602911 / 2.077655 (4.525256) | 2.644290 
/ 1.504120 (1.140171) | 2.268593 / 1.541195 (0.727399) | 2.325393 / 1.468490 (0.856903) | 1.388156 / 4.584777 (-3.196621) | 5.958569 / 3.745712 (2.212857) | 3.310756 / 5.269862 (-1.959106) | 2.390953 / 4.565676 (-2.174724) | 0.147416 / 0.424275 (-0.276859) | 0.015201 / 0.007607 (0.007594) | 0.794109 / 0.226044 (0.568064) | 7.984855 / 2.268929 (5.715926) | 3.382275 / 55.444624 (-52.062349) | 2.676102 / 6.876477 (-4.200375) | 2.846743 / 2.142072 (0.704671) | 1.467523 / 4.805227 (-3.337704) | 0.283184 / 6.500664 (-6.217480) | 0.088655 / 0.075469 (0.013186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632765 / 1.841788 (-0.209022) | 19.102473 / 8.074308 (11.028165) | 25.632535 / 10.191392 (15.441143) | 0.255628 / 0.680424 (-0.424795) | 0.034655 / 0.534201 (-0.499546) | 0.564593 / 0.579283 (-0.014690) | 0.668339 / 0.434364 (0.233975) | 0.648414 / 0.540337 (0.108076) | 0.766735 / 1.386936 (-0.620201) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009658 / 0.011353 (-0.001695) | 0.006690 / 0.011008 (-0.004318) | 0.099151 / 0.038508 (0.060643) | 0.037092 / 0.023109 (0.013983) | 0.470354 / 0.275898 (0.194456) | 0.525863 / 0.323480 (0.202383) | 0.007593 / 0.007986 (-0.000393) | 0.006637 / 0.004328 (0.002308) | 0.098782 / 0.004250 (0.094532) | 0.058524 / 0.037052 (0.021471) | 0.502569 / 0.258489 (0.244080) | 0.526410 / 0.293841 (0.232569) | 0.059486 / 0.128546 (-0.069060) | 0.019742 / 0.075646 (-0.055904) | 0.119715 / 0.419271 (-0.299556) | 0.065269 / 0.043533 (0.021736) | 0.483327 / 0.255139 (0.228188) | 0.506148 / 0.283200 (0.222948) | 0.123178 / 0.141683 (-0.018505) | 1.916624 / 1.452155 (0.464470) | 2.051410 / 1.492716 (0.558694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286481 / 0.018006 (0.268475) | 0.597300 / 0.000490 (0.596810) | 0.008906 / 0.000200 (0.008706) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031406 / 0.037411 (-0.006005) | 0.146748 / 0.014526 (0.132222) | 0.152898 / 0.176557 (-0.023658) | 0.212535 / 0.737135 (-0.524600) | 0.155577 / 0.296338 (-0.140761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.660989 / 0.215209 (0.445780) | 6.688530 / 2.077655 (4.610875) | 3.039278 / 1.504120 (1.535159) | 2.660357 / 1.541195 (1.119162) | 2.696912 / 1.468490 (1.228422) | 1.259760 / 4.584777 (-3.325017) | 5.922452 / 3.745712 (2.176740) | 5.304200 / 5.269862 (0.034338) | 2.823928 / 4.565676 (-1.741748) | 0.148118 / 0.424275 (-0.276157) | 0.015575 / 0.007607 (0.007968) | 0.794404 / 0.226044 (0.568360) | 8.233651 / 2.268929 (5.964722) | 3.777482 / 55.444624 (-51.667142) | 3.064924 / 6.876477 (-3.811552) | 3.117803 / 2.142072 (0.975731) | 1.479559 / 4.805227 (-3.325668) | 0.254070 / 6.500664 (-6.246594) | 0.086806 / 0.075469 (0.011337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.735515 / 1.841788 (-0.106273) | 18.934157 / 8.074308 (10.859848) | 22.645248 / 10.191392 (12.453856) | 0.227073 / 0.680424 (-0.453351) | 0.030650 / 0.534201 (-0.503551) | 0.594619 / 0.579283 (0.015336) | 0.653304 / 0.434364 (0.218940) | 0.707484 / 0.540337 (0.167147) | 0.823327 / 1.386936 (-0.563610) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#273392966e434286f4f5ba2ad596730bff11056d \"CML watermark\")\n" ]
"2023-04-11T08:33:15"
"2023-04-11T08:57:45"
"2023-04-11T08:47:55"
MEMBER
null
Fix #5730.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5731/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5731", "html_url": "https://github.com/huggingface/datasets/pull/5731", "diff_url": "https://github.com/huggingface/datasets/pull/5731.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5731.patch", "merged_at": "2023-04-11T08:47:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/5730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5730/comments
https://api.github.com/repos/huggingface/datasets/issues/5730/events
https://github.com/huggingface/datasets/issues/5730
1,662,007,926
I_kwDODunzps5jEDp2
5,730
CI is broken: ValueError: Name (mock) already in the registry and clobber is False
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-04-11T08:29:46"
"2023-04-11T08:47:56"
"2023-04-11T08:47:56"
MEMBER
null
CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948 ``` =========================== short test summary info ============================ ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare_reload - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_file_utils.py::test_get_from_cache_fsspec - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_filesystem.py::test_is_remote_filesystem - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[tmp_path-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level/second_level/date=2019-10-01-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path/file.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://top_level-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://dir_that_doesnt_exist-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://-False] - ValueError: Name (mock) already in the registry 
and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[tmp_path/file.txt-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://-0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://top_level/second_level/date=2019-10-01/a.parquet-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[tmp_path/*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[tmp_path-expected_outputs0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[mock://top_level/second_level-expected_outputs1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR 
tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]/*-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ===== 2105 passed, 18 skipped, 38 warnings, 46 errors in 236.22s (0:03:56) ===== ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5730/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5729/comments
https://api.github.com/repos/huggingface/datasets/issues/5729/events
https://github.com/huggingface/datasets/pull/5729
1,661,929,923
PR_kwDODunzps5N_pvI
5,729
Fix nondeterministic sharded data split order
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006954 / 0.011353 (-0.004399) | 0.004947 / 0.011008 (-0.006061) | 0.086564 / 0.038508 (0.048056) | 0.031167 / 0.023109 (0.008058) | 0.262285 / 0.275898 (-0.013613) | 0.295753 / 0.323480 (-0.027727) | 0.005389 / 0.007986 (-0.002596) | 0.004130 / 0.004328 (-0.000198) | 0.065127 / 0.004250 (0.060877) | 0.042511 / 0.037052 (0.005458) | 0.263497 / 0.258489 (0.005008) | 0.307456 / 0.293841 (0.013615) | 0.031338 / 0.128546 (-0.097209) | 0.011023 / 0.075646 (-0.064623) | 0.295625 / 0.419271 (-0.123647) | 0.045813 / 0.043533 (0.002280) | 0.259369 / 0.255139 (0.004230) | 0.279325 / 0.283200 (-0.003875) | 0.099748 / 0.141683 (-0.041934) | 1.252572 / 1.452155 (-0.199583) | 1.347069 / 1.492716 (-0.145647) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249726 / 0.018006 (0.231720) | 0.556882 / 0.000490 (0.556392) | 0.008237 / 0.000200 (0.008037) | 0.000294 / 0.000054 (0.000239) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026879 / 0.037411 (-0.010533) | 0.105141 / 0.014526 (0.090615) | 0.115473 / 0.176557 (-0.061084) | 0.172989 / 0.737135 (-0.564147) | 0.120433 / 0.296338 (-0.175906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400022 / 0.215209 (0.184812) | 3.965402 / 2.077655 (1.887747) | 1.805257 / 1.504120 (0.301138) | 1.610136 / 1.541195 (0.068941) | 1.661162 / 1.468490 (0.192672) | 0.695311 / 4.584777 (-3.889466) | 3.753757 / 3.745712 (0.008045) | 2.060609 / 5.269862 (-3.209253) | 1.333251 / 4.565676 (-3.232426) | 0.085790 / 0.424275 (-0.338485) | 0.012256 / 0.007607 (0.004649) | 0.502133 / 0.226044 (0.276088) | 5.040979 / 2.268929 (2.772051) | 2.310919 / 55.444624 (-53.133705) | 2.010534 / 6.876477 (-4.865943) | 2.132961 / 2.142072 (-0.009111) | 0.837636 / 4.805227 (-3.967592) | 0.169838 / 6.500664 (-6.330826) | 0.065003 / 0.075469 (-0.010466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218674 / 1.841788 (-0.623114) | 14.696076 / 8.074308 (6.621768) | 14.559492 / 10.191392 (4.368100) | 0.167761 / 0.680424 (-0.512663) | 0.017747 / 0.534201 (-0.516454) | 0.421624 / 0.579283 (-0.157659) | 0.414086 / 0.434364 (-0.020278) | 0.501398 / 0.540337 (-0.038940) | 0.596099 / 1.386936 (-0.790837) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007230 / 0.011353 (-0.004123) | 0.005345 / 0.011008 (-0.005664) | 0.073739 / 0.038508 (0.035231) | 0.033440 / 0.023109 (0.010330) | 0.339790 / 0.275898 (0.063892) | 0.367857 / 0.323480 (0.044377) | 0.005927 / 0.007986 (-0.002058) | 0.004279 / 0.004328 (-0.000049) | 0.074247 / 0.004250 (0.069996) | 0.048971 / 0.037052 (0.011918) | 0.340235 / 0.258489 (0.081746) | 0.380521 / 0.293841 (0.086680) | 0.035322 / 0.128546 (-0.093225) | 0.012416 / 0.075646 (-0.063230) | 0.086060 / 0.419271 (-0.333212) | 0.049331 / 0.043533 (0.005799) | 0.342871 / 0.255139 (0.087732) | 0.355673 / 0.283200 (0.072473) | 0.111976 / 0.141683 (-0.029707) | 1.462530 / 1.452155 (0.010375) | 1.550336 / 1.492716 (0.057620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.266560 / 0.018006 (0.248554) | 0.550886 / 0.000490 (0.550396) | 0.001069 / 0.000200 (0.000869) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028701 / 0.037411 (-0.008711) | 0.110535 / 0.014526 (0.096010) | 0.122846 / 0.176557 (-0.053711) | 0.176395 / 0.737135 (-0.560740) | 0.128653 / 0.296338 (-0.167685) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431693 / 0.215209 (0.216484) | 4.283691 / 2.077655 (2.206036) | 2.013967 / 1.504120 (0.509847) | 1.823914 / 1.541195 (0.282719) | 1.872055 / 1.468490 (0.403565) | 0.703318 / 4.584777 (-3.881459) | 3.783412 / 3.745712 (0.037699) | 2.950147 / 5.269862 (-2.319715) | 1.826159 / 4.565676 (-2.739518) | 0.086897 / 0.424275 (-0.337379) | 0.012512 / 0.007607 (0.004905) | 0.526730 / 0.226044 (0.300685) | 5.263871 / 2.268929 (2.994943) | 2.552163 / 55.444624 (-52.892462) | 2.276216 / 6.876477 (-4.600261) | 2.419934 / 2.142072 (0.277862) | 0.848235 / 4.805227 (-3.956993) | 0.170405 / 6.500664 (-6.330259) | 0.064979 / 0.075469 (-0.010491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276780 / 1.841788 (-0.565008) | 15.100829 / 8.074308 (7.026521) | 15.117531 / 10.191392 (4.926139) | 0.147129 / 0.680424 (-0.533295) | 0.017806 / 0.534201 (-0.516395) | 0.422975 / 0.579283 (-0.156308) | 0.430286 / 0.434364 (-0.004078) | 0.501405 / 0.540337 (-0.038932) | 0.596810 / 1.386936 (-0.790126) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f6ee2e6603fe81638256d37a6aa7ad0400e31a83 \"CML watermark\")\n" ]
"2023-04-11T07:34:20"
"2023-04-26T15:12:25"
"2023-04-26T15:05:12"
MEMBER
null
This PR makes the order of the split names deterministic. Before it was nondeterministic because we were iterating over `set` elements. Fix #5728.
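As a generic illustration of the pitfall mentioned here (not the actual patch), iterating over a `set` gives no ordering guarantee across interpreter runs, while deduplicating through `dict.fromkeys` or sorting keeps the order deterministic:

```python
split_names = ["train", "random", "train"]

# set() deduplicates, but its iteration order can change between runs
# (string hashing is randomized), which is what made the test flaky.
nondeterministic = set(split_names)

# Deterministic alternatives:
ordered_unique = list(dict.fromkeys(split_names))  # dedupe, keep first-seen order
canonical = sorted(set(split_names))               # dedupe, canonical sorted order

assert ordered_unique == ["train", "random"]
assert canonical == ["random", "train"]
```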
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5729/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5729", "html_url": "https://github.com/huggingface/datasets/pull/5729", "diff_url": "https://github.com/huggingface/datasets/pull/5729.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5729.patch", "merged_at": "2023-04-26T15:05:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/5728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5728/comments
https://api.github.com/repos/huggingface/datasets/issues/5728/events
https://github.com/huggingface/datasets/issues/5728
1,661,925,932
I_kwDODunzps5jDvos
5,728
The order of data split names is nondeterministic
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-04-11T07:31:25"
"2023-04-26T15:05:13"
"2023-04-26T15:05:13"
MEMBER
null
After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718

```
FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']
At index 0 diff: 'random' != 'train'
Full diff:
- ['train', 'random']
+ ['random', 'train']
```

I have checked locally and found out that the data split order is nondeterministic. This is caused by the use of `set` for sharded splits.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5728/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5727/comments
https://api.github.com/repos/huggingface/datasets/issues/5727/events
https://github.com/huggingface/datasets/issues/5727
1,661,536,363
I_kwDODunzps5jCQhr
5,727
load_dataset fails with FileNotFound error on Windows
{ "login": "joelkowalewski", "id": 122648572, "node_id": "U_kgDOB093_A", "avatar_url": "https://avatars.githubusercontent.com/u/122648572?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joelkowalewski", "html_url": "https://github.com/joelkowalewski", "followers_url": "https://api.github.com/users/joelkowalewski/followers", "following_url": "https://api.github.com/users/joelkowalewski/following{/other_user}", "gists_url": "https://api.github.com/users/joelkowalewski/gists{/gist_id}", "starred_url": "https://api.github.com/users/joelkowalewski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joelkowalewski/subscriptions", "organizations_url": "https://api.github.com/users/joelkowalewski/orgs", "repos_url": "https://api.github.com/users/joelkowalewski/repos", "events_url": "https://api.github.com/users/joelkowalewski/events{/privacy}", "received_events_url": "https://api.github.com/users/joelkowalewski/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Can you please paste the entire error stack trace, not only the last few lines?", "`----> 1 dataset = datasets.load_dataset(\"glue\", \"ax\")\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1767, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1762 verification_mode = VerificationMode(\r\n 1763 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS\r\n 1764 )\r\n 1766 # Create a dataset builder\r\n-> 1767 builder_instance = load_dataset_builder(\r\n 1768 path=path,\r\n 1769 name=name,\r\n 1770 data_dir=data_dir,\r\n 1771 data_files=data_files,\r\n 1772 cache_dir=cache_dir,\r\n 1773 features=features,\r\n 1774 download_config=download_config,\r\n 1775 download_mode=download_mode,\r\n 1776 revision=revision,\r\n 1777 use_auth_token=use_auth_token,\r\n 1778 storage_options=storage_options,\r\n 1779 **config_kwargs,\r\n 1780 )\r\n 1782 # Return iterable dataset in case of streaming\r\n 1783 if streaming:\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1498, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, storage_options, **config_kwargs)\r\n 1496 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1497 download_config.use_auth_token = use_auth_token\r\n-> 1498 dataset_module = dataset_module_factory(\r\n 1499 path,\r\n 1500 revision=revision,\r\n 1501 download_config=download_config,\r\n 1502 download_mode=download_mode,\r\n 1503 data_dir=data_dir,\r\n 1504 data_files=data_files,\r\n 1505 )\r\n 1507 # Get dataset builder class from the processing script\r\n 1508 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1211, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1209 raise e1 from None\r\n 1210 if isinstance(e1, FileNotFoundError):\r\n-> 1211 raise FileNotFoundError(\r\n 1212 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\r\n 1213 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1214 ) from None\r\n 1215 raise e1 from None\r\n 1216 else:`", "Okay, this is the issue:\r\n```\r\nFileNotFoundError: [WinError 3] The system cannot find the path specified: \r\n'C:\\\\Users\\\\...\\\\.cache\\\\huggingface'\r\n``` \r\n\r\nI don't remember seeing this error before.\r\n\r\nI guess it could happen in a multi-process environment if one of the processes deletes the `datasets` cache as the other one is loading a dataset (with `load_dataset`), so make sure that's not the case. Also, you can disable the Windows max path length limit (if enabled), but this is most likely not the problem.", "Closing due to inactivity." ]
"2023-04-10T23:21:12"
"2023-07-21T14:08:20"
"2023-07-21T14:08:19"
NONE
null
### Describe the bug Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps: (1) create conda environment (2) activate environment (3) install with: ``conda` install -c huggingface -c conda-forge datasets` Then ``` from datasets import load_dataset # this or any other example from the website fails with the FileNotFoundError glue = load_dataset("glue", "ax") ``` **Below I have pasted the error omitting the full path**: ``` raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\...\\.cache\\huggingface' ``` ### Steps to reproduce the bug On Windows 10 1) create a minimal conda environment (with just Python) (2) activate environment (3) install datasets with: ``conda` install -c huggingface -c conda-forge datasets` (4) import load_dataset and follow example usage from any dataset card. ### Expected behavior The expected behavior is to load the file into the Python session running on my machine without error. ### Environment info ``` # Name Version Build Channel aiohttp 3.8.4 py311ha68e1ae_0 conda-forge aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge attrs 22.2.0 pyh71513ae_0 conda-forge aws-c-auth 0.6.26 h1262f0c_1 conda-forge aws-c-cal 0.5.21 h7cda486_2 conda-forge aws-c-common 0.8.14 hcfcfb64_0 conda-forge aws-c-compression 0.2.16 h8a79959_5 conda-forge aws-c-event-stream 0.2.20 h5f78564_4 conda-forge aws-c-http 0.7.6 h2545be9_0 conda-forge aws-c-io 0.13.19 h0d2781e_3 conda-forge aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge aws-c-s3 0.2.7 h8113e7b_1 conda-forge aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge aws-checksums 0.1.14 h8a79959_5 conda-forge aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge bzip2 1.0.8 h8ffe710_4 conda-forge c-ares 1.19.0 h2bbff1b_0 ca-certificates 2023.01.10 haa95532_0 certifi 2022.12.7 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311h7d9ee11_3 conda-forge charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge colorama 0.4.6 pyhd8ed1ab_0 conda-forge cryptography 40.0.1 py311h28e9c30_0 conda-forge dataclasses 0.8 pyhc8e2a94_3 conda-forge datasets 2.11.0 py_0 huggingface dill 0.3.6 pyhd8ed1ab_1 conda-forge filelock 3.11.0 pyhd8ed1ab_0 conda-forge frozenlist 1.3.3 py311ha68e1ae_0 conda-forge fsspec 2023.4.0 pyh1a96a4e_0 conda-forge gflags 2.2.2 ha925a31_1004 conda-forge glog 0.6.0 h4797de2_0 conda-forge huggingface_hub 0.13.4 py_0 huggingface idna 3.4 pyhd8ed1ab_0 conda-forge importlib-metadata 6.3.0 pyha770c72_0 conda-forge importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge intel-openmp 2023.0.0 h57928b3_25922 conda-forge krb5 1.20.1 heb0366b_0 conda-forge libabseil 20230125.0 cxx17_h63175ca_1 conda-forge libarrow 11.0.0 h04c43f8_13_cpu conda-forge libblas 3.9.0 16_win64_mkl conda-forge libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge libbrotlidec 1.0.9 hcfcfb64_8 conda-forge libbrotlienc 1.0.9 hcfcfb64_8 conda-forge libcblas 3.9.0 16_win64_mkl conda-forge libcrc32c 1.1.2 h0e60522_0 conda-forge libcurl 7.88.1 h68f0423_1 conda-forge libexpat 2.5.0 h63175ca_1 conda-forge libffi 3.4.2 h8ffe710_5 conda-forge libgoogle-cloud 2.8.0 hf2ff781_1 conda-forge libgrpc 1.52.1 h32da247_1 
conda-forge libhwloc 2.9.0 h51c2c0f_0 conda-forge libiconv 1.17 h8ffe710_0 conda-forge liblapack 3.9.0 16_win64_mkl conda-forge libprotobuf 3.21.12 h12be248_0 conda-forge libsqlite 3.40.0 hcfcfb64_0 conda-forge libssh2 1.10.0 h9a1e1f7_3 conda-forge libthrift 0.18.1 h9ce19ad_0 conda-forge libutf8proc 2.8.0 h82a8f57_0 conda-forge libxml2 2.10.3 hc3477c8_6 conda-forge libzlib 1.2.13 hcfcfb64_4 conda-forge lz4-c 1.9.4 hcfcfb64_0 conda-forge mkl 2022.1.0 h6a75c08_874 conda-forge multidict 6.0.4 py311ha68e1ae_0 conda-forge multiprocess 0.70.14 py311ha68e1ae_3 conda-forge numpy 1.24.2 py311h0b4df5a_0 conda-forge openssl 3.1.0 hcfcfb64_0 conda-forge orc 1.8.3 hada7b9e_0 conda-forge packaging 23.0 pyhd8ed1ab_0 conda-forge pandas 2.0.0 py311hf63dbb6_0 conda-forge parquet-cpp 1.5.1 2 conda-forge pip 23.0.1 pyhd8ed1ab_0 conda-forge pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge pyarrow 11.0.0 py311h6a6099b_13_cpu conda-forge pycparser 2.21 pyhd8ed1ab_0 conda-forge pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 pyh0701188_6 conda-forge python 3.11.3 h2628c8c_0_cpython conda-forge python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge python-xxhash 3.2.0 py311ha68e1ae_0 conda-forge python_abi 3.11 3_cp311 conda-forge pytz 2023.3 pyhd8ed1ab_0 conda-forge pyyaml 6.0 py311ha68e1ae_5 conda-forge re2 2023.02.02 h63175ca_0 conda-forge requests 2.28.2 pyhd8ed1ab_1 conda-forge setuptools 67.6.1 pyhd8ed1ab_0 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge snappy 1.1.10 hfb803bf_0 conda-forge tbb 2021.8.0 h91493d7_0 conda-forge tk 8.6.12 h8ffe710_0 conda-forge tqdm 4.65.0 pyhd8ed1ab_1 conda-forge typing-extensions 4.5.0 hd8ed1ab_0 conda-forge typing_extensions 4.5.0 pyha770c72_0 conda-forge tzdata 2023c h71feb2d_0 conda-forge ucrt 10.0.22621.0 h57928b3_0 conda-forge urllib3 1.26.15 pyhd8ed1ab_0 conda-forge vc 14.3 hb6edc58_10 conda-forge vs2015_runtime 14.34.31931 h4c5c07a_10 conda-forge wheel 0.40.0 pyhd8ed1ab_0 conda-forge win_inet_pton 1.1.0 pyhd8ed1ab_6 conda-forge xxhash 0.8.1 hcfcfb64_0 conda-forge xz 5.2.10 h8cc25b3_1 yaml 0.2.5 h8ffe710_2 conda-forge yarl 1.8.2 py311ha68e1ae_0 conda-forge zipp 3.15.0 pyhd8ed1ab_0 conda-forge zlib 1.2.13 hcfcfb64_4 conda-forge zstd 1.5.4 hd43e919_0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5727/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5726/comments
https://api.github.com/repos/huggingface/datasets/issues/5726/events
https://github.com/huggingface/datasets/issues/5726
1,660,944,807
I_kwDODunzps5jAAGn
5,726
Fallback JSON Dataset loading does not load all values when features specified manually
{ "login": "myluki2000", "id": 3610788, "node_id": "MDQ6VXNlcjM2MTA3ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/3610788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/myluki2000", "html_url": "https://github.com/myluki2000", "followers_url": "https://api.github.com/users/myluki2000/followers", "following_url": "https://api.github.com/users/myluki2000/following{/other_user}", "gists_url": "https://api.github.com/users/myluki2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/myluki2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/myluki2000/subscriptions", "organizations_url": "https://api.github.com/users/myluki2000/orgs", "repos_url": "https://api.github.com/users/myluki2000/repos", "events_url": "https://api.github.com/users/myluki2000/events{/privacy}", "received_events_url": "https://api.github.com/users/myluki2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix." ]
"2023-04-10T15:22:14"
"2023-04-21T06:35:28"
"2023-04-21T06:35:28"
NONE
null
### Describe the bug

The fallback JSON dataset loader located here:

https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153

does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm pretty sure this is not supposed to be the expected behavior? To fix this, you'd have to change this line:

https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140

to pass a schema to pyarrow which has the same structure as the features argument passed to the load_dataset() method.

### Steps to reproduce the bug

Consider a dataset JSON like this:

```
[
  {
    "instruction": "Do stuff",
    "output": "Answer stuff"
  },
  {
    "instruction": "Do stuff2",
    "input": "Additional Input2",
    "output": "Answer stuff2"
  }
]
```

Using this code to load the dataset:

```
from datasets import load_dataset, Features, Value

features = {
    "instruction": Value("string"),
    "input": Value("string"),
    "output": Value("string")
}
features = Features(features)

ds = load_dataset("json", data_files="./ds.json", features=features)

for row in ds["train"]:
    print(row)
```

we get a dataset that looks like this:

| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | None | "Answer Stuff2" |

### Expected behavior

The input column should contain values other than None for dataset entries that have the "input" attribute set:

| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | "Additional Input2" | "Answer Stuff2" |

### Environment info

Python 3.10.10
Datasets 2.11.0
Windows 10
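Until the builder passes an explicit schema to pyarrow, one possible workaround is to bypass the packaged JSON loader and build the dataset from a list of dicts: `Dataset.from_list` fills keys that are missing from individual records with None instead of dropping them. This is only a sketch, assuming the file fits in memory and reusing the `ds.json` path and column names from the example above.

```python
import json

from datasets import Dataset, Features, Value

features = Features(
    {
        "instruction": Value("string"),
        "input": Value("string"),
        "output": Value("string"),
    }
)

with open("./ds.json", encoding="utf-8") as f:
    records = json.load(f)  # list of dicts; keys may differ per record

# Missing keys (e.g. "input" in the first record) become None per example.
ds = Dataset.from_list(records, features=features)

for row in ds:
    print(row)
```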
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5726/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5725/comments
https://api.github.com/repos/huggingface/datasets/issues/5725/events
https://github.com/huggingface/datasets/issues/5725
1,660,455,202
I_kwDODunzps5i-Iki
5,725
How to limit the number of examples in dataset, for testing?
{ "login": "ndvbd", "id": 845175, "node_id": "MDQ6VXNlcjg0NTE3NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/845175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ndvbd", "html_url": "https://github.com/ndvbd", "followers_url": "https://api.github.com/users/ndvbd/followers", "following_url": "https://api.github.com/users/ndvbd/following{/other_user}", "gists_url": "https://api.github.com/users/ndvbd/gists{/gist_id}", "starred_url": "https://api.github.com/users/ndvbd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ndvbd/subscriptions", "organizations_url": "https://api.github.com/users/ndvbd/orgs", "repos_url": "https://api.github.com/users/ndvbd/repos", "events_url": "https://api.github.com/users/ndvbd/events{/privacy}", "received_events_url": "https://api.github.com/users/ndvbd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! You can use the `nrows` parameter for this:\r\n```python\r\ndata = load_dataset(\"json\", data_files=data_path, nrows=10)\r\n```", "@mariosasko I get:\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'nrows'`", "I misread the format in which the dataset is stored - the `nrows` parameter works for CSV, but not JSON.\r\n\r\nThis means the only option is first to create a DataFrame and then convert it to a Dataset object:\r\n```python\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\ndf = pd.read_json(data_path, lines=True, nrows=10)\r\nds = Dataset.from_pandas(df)\r\n```" ]
"2023-04-10T08:41:43"
"2023-04-21T06:16:24"
"2023-04-21T06:16:24"
NONE
null
### Describe the bug I am using this command: `data = load_dataset("json", data_files=data_path)` However, I want to add a parameter, to limit the number of loaded examples to be 10, for development purposes, but can't find this simple parameter. ### Steps to reproduce the bug In the description. ### Expected behavior To be able to limit the number of examples ### Environment info Nothing special
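Not from the thread, but two other ways to cap the number of examples may help here. This is a minimal sketch assuming a non-streaming JSON load; `data.jsonl` is a placeholder path.

```python
from datasets import load_dataset

data_path = "data.jsonl"  # placeholder for the real file

# Option 1: slice the split at load time (the file is still fully read once).
small = load_dataset("json", data_files=data_path, split="train[:10]")

# Option 2: load normally, then keep only the first 10 rows.
data = load_dataset("json", data_files=data_path)
small = data["train"].select(range(10))

print(len(small))  # 10
```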
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5725/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5724/comments
https://api.github.com/repos/huggingface/datasets/issues/5724/events
https://github.com/huggingface/datasets/issues/5724
1,659,938,135
I_kwDODunzps5i8KVX
5,724
Error after shuffling streaming IterableDatasets with downloaded dataset
{ "login": "szxiangjn", "id": 41177966, "node_id": "MDQ6VXNlcjQxMTc3OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/szxiangjn", "html_url": "https://github.com/szxiangjn", "followers_url": "https://api.github.com/users/szxiangjn/followers", "following_url": "https://api.github.com/users/szxiangjn/following{/other_user}", "gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions", "organizations_url": "https://api.github.com/users/szxiangjn/orgs", "repos_url": "https://api.github.com/users/szxiangjn/repos", "events_url": "https://api.github.com/users/szxiangjn/events{/privacy}", "received_events_url": "https://api.github.com/users/szxiangjn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\r\n\r\nPS: https://github.com/huggingface/datasets/pull/5331, once merged, will allow us to define C4's configs in its README, making downloading it much more user-friendly." ]
"2023-04-09T16:58:44"
"2023-04-20T20:37:30"
"2023-04-20T20:37:30"
NONE
null
### Describe the bug I downloaded the C4 dataset and used streaming IterableDatasets to read it. Everything worked normally until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. The shuffled dataset throws the following error when it is used with `next(iter(dataset))`: ``` File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__ for key, example in ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__ for x in self.ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__ yield from self.generate_examples_fn(**kwargs_with_shuffled_shards) File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper for key, table in generate_tables_fn(**kwargs): File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables batch = f.read(self.config.chunksize) File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries out = read(*args, **kwargs) File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read return self._buffer.read(size) File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read if not self._read_gzip_header(): File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header raise BadGzipFile('Not a gzipped file (%r)' % magic) gzip.BadGzipFile: Not a gzipped file (b've') ``` I found that there is no problem using the dataset in this way without shuffling. Also, using `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which downloads the dataset on the fly instead of loading it from the local files, causes no problems even after shuffling. ### Steps to reproduce the bug 1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4 2. ``` import datasets dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train') dataset = dataset.shuffle(buffer_size=10_000, seed=42) next(iter(dataset)) ``` ### Expected behavior `next(iter(dataset))` should give me a sample from the dataset ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5724/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5722/comments
https://api.github.com/repos/huggingface/datasets/issues/5722/events
https://github.com/huggingface/datasets/issues/5722
1,659,837,510
I_kwDODunzps5i7xxG
5,722
Distributed Training Error on Customized Dataset
{ "login": "wlhgtc", "id": 16603773, "node_id": "MDQ6VXNlcjE2NjAzNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wlhgtc", "html_url": "https://github.com/wlhgtc", "followers_url": "https://api.github.com/users/wlhgtc/followers", "following_url": "https://api.github.com/users/wlhgtc/following{/other_user}", "gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}", "starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions", "organizations_url": "https://api.github.com/users/wlhgtc/orgs", "repos_url": "https://api.github.com/users/wlhgtc/repos", "events_url": "https://api.github.com/users/wlhgtc/events{/privacy}", "received_events_url": "https://api.github.com/users/wlhgtc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node." ]
"2023-04-09T11:04:59"
"2023-07-24T14:50:46"
"2023-07-24T14:50:46"
NONE
null
Hi guys, recently I tried to use `datasets` to train a dual encoder. I wrote my own dataset script following the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script). Here is my code: ```python class RetrivalDataset(datasets.GeneratorBasedBuilder): """CrossEncoder dataset.""" BUILDER_CONFIGS = [RetrivalConfig(name="DuReader")] # DEFAULT_CONFIG_NAME = "DuReader" def _info(self): return datasets.DatasetInfo( features=datasets.Features( { "id": datasets.Value("string"), "question": datasets.Value("string"), "documents": Sequence(datasets.Value("string")), } ), supervised_keys=None, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" train_file = self.config.data_dir + self.config.train_file valid_file = self.config.data_dir + self.config.valid_file logger.info(f"Training on {self.config.train_file}") logger.info(f"Evaluating on {self.config.valid_file}") return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_file} ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={"file_path": valid_file} ), ] def _generate_examples(self, file_path): with jsonlines.open(file_path, "r") as f: for record in f: label = record["label"] question = record["question"] # dual encoder all_documents = record["all_documents"] positive_paragraph = all_documents.pop(label) all_documents = [positive_paragraph] + all_documents u_id = "{}_#_{}".format( md5_hash(question + "".join(all_documents)), "".join(random.sample(string.ascii_letters + string.digits, 7)), ) item = { "question": question, "documents": all_documents, "id": u_id, } yield u_id, item ``` It works well on a single GPU, but I got the following error when using DDP: ```python Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(OpType=ALLGATHER_COALESCED) ``` Here is my training script on a two-A100 machine: ```bash export TORCH_DISTRIBUTED_DEBUG=DETAIL export TORCH_SHOW_CPP_STACKTRACES=1 export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=INIT,COLL,ENV nohup torchrun --nproc_per_node 2 train.py experiments/de-big.json >logs/de-big.log 2>&1& ``` I am not sure if this error is related to my dataset code when using DDP. I also noticed the PR (#5369), but I don't know when and where I should use the function (`split_dataset_by_node`). @lhoestq I hope you could help me.
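Not part of the original report: a minimal sketch of where `split_dataset_by_node` fits, assuming a streaming (iterable) dataset and rank/world size taken from the torchrun environment; the `train.jsonl` file name is a placeholder. For a regular map-style dataset, a `DistributedSampler` on the DataLoader side plays this role instead.

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

rank = int(os.environ["RANK"])              # set by torchrun
world_size = int(os.environ["WORLD_SIZE"])  # set by torchrun

# "train.jsonl" is a placeholder for the retrieval data used above.
stream = load_dataset("json", data_files="train.jsonl", split="train", streaming=True)
stream = split_dataset_by_node(stream, rank=rank, world_size=world_size)

loader = DataLoader(stream, batch_size=8)
for batch in loader:
    ...  # each rank now iterates over a disjoint shard of the stream
```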
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5722/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5721/comments
https://api.github.com/repos/huggingface/datasets/issues/5721/events
https://github.com/huggingface/datasets/issues/5721
1,659,680,682
I_kwDODunzps5i7Leq
5,721
Calling datasets.load_dataset("text" ...) results in a wrong split.
{ "login": "cyrilzakka", "id": 1841186, "node_id": "MDQ6VXNlcjE4NDExODY=", "avatar_url": "https://avatars.githubusercontent.com/u/1841186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyrilzakka", "html_url": "https://github.com/cyrilzakka", "followers_url": "https://api.github.com/users/cyrilzakka/followers", "following_url": "https://api.github.com/users/cyrilzakka/following{/other_user}", "gists_url": "https://api.github.com/users/cyrilzakka/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyrilzakka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyrilzakka/subscriptions", "organizations_url": "https://api.github.com/users/cyrilzakka/orgs", "repos_url": "https://api.github.com/users/cyrilzakka/repos", "events_url": "https://api.github.com/users/cyrilzakka/events{/privacy}", "received_events_url": "https://api.github.com/users/cyrilzakka/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-08T23:55:12"
"2023-04-08T23:55:12"
null
NONE
null
### Describe the bug When creating a text dataset, the training split should have the bulk of the examples by default. Currently, the test split does. ### Steps to reproduce the bug I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code: ``` folder_path = "/home/cyril/Downloads/llama_dataset" data = datasets.load_dataset("text", data_dir=folder_path) data.save_to_disk("/home/cyril/Downloads/data.hf") data = datasets.load_from_disk("/home/cyril/Downloads/data.hf") print(data) ``` results in the following split: ``` DatasetDict({ train: Dataset({ features: ['text'], num_rows: 2114 }) test: Dataset({ features: ['text'], num_rows: 200882 }) validation: Dataset({ features: ['text'], num_rows: 152 }) }) ``` It seems to me like the train/test/validation splits are in the wrong order, since the test split is far larger than the train split. ### Expected behavior The train split should have the bulk of the training examples. ### Environment info datasets 2.11.0, python 3.10.6
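One way to sidestep the inferred splits, sketched here rather than taken from the report: pass `data_files` explicitly so every file lands in the train split. The `*.txt` glob is an assumption about how the 18K files are named.

```python
import datasets

folder_path = "/home/cyril/Downloads/llama_dataset"

# Explicit data_files prevents the loader from guessing test/validation splits
# from file names; the "*.txt" pattern is an assumption.
data = datasets.load_dataset("text", data_files={"train": f"{folder_path}/*.txt"})
print(data)  # all rows should now be in the train split
```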
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5721/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5720/comments
https://api.github.com/repos/huggingface/datasets/issues/5720/events
https://github.com/huggingface/datasets/issues/5720
1,659,610,705
I_kwDODunzps5i66ZR
5,720
Streaming IterableDatasets do not work with torch DataLoaders
{ "login": "jlehrer1", "id": 29244648, "node_id": "MDQ6VXNlcjI5MjQ0NjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/29244648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlehrer1", "html_url": "https://github.com/jlehrer1", "followers_url": "https://api.github.com/users/jlehrer1/followers", "following_url": "https://api.github.com/users/jlehrer1/following{/other_user}", "gists_url": "https://api.github.com/users/jlehrer1/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlehrer1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlehrer1/subscriptions", "organizations_url": "https://api.github.com/users/jlehrer1/orgs", "repos_url": "https://api.github.com/users/jlehrer1/repos", "events_url": "https://api.github.com/users/jlehrer1/events{/privacy}", "received_events_url": "https://api.github.com/users/jlehrer1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Edit: This behavior is true even without `.take/.set`", "I'm experiencing the same problem that @jlehrer1. I was able to reproduce it with a very small example:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n# Saving the dataset as a parquet file\r\ndataset = Dataset.from_generator(my_gen)\r\ntrain_path = \"/tmp/test.parquet\"\r\ndataset.to_parquet(train_path)\r\n\r\n# Creating a local dataset from the parquet file\r\ndata_files = {\"train\": [str(train_path)]}\r\nbuilder = load_dataset_builder(\"parquet\", data_files=data_files)\r\nbuilder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n# Loading the dataset from the local directory as streaming\r\ndataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\ndataset.with_format(\"torch\")\r\n\r\ndl = DataLoader(dataset, batch_size=2, num_workers=1)\r\nfor row in dl:\r\n print(row)\r\n```\r\n\r\nMy env info:\r\n```\r\ndatasets 2.11.0\r\ntorch 2.0.0\r\ntorchvision 0.15.1\r\nPython 3.9.16\r\n```\r\n\r\nNote that the example above doesn't fail if the number of workers used is `0`", "I cannot reproduce this error, not even with your MRE @ivanprado (your env appears to be the same as Colab's, and your code runs there without issues). ", "@mariosasko you are right, it works on Colab. I digged deeper and found that the problem arises when the multiprocessing method is set to be `spawn`. This code reproduces the problem in Colab:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\nimport multiprocessing as mp\r\n\r\nmp.set_start_method('spawn')\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n\r\ndef main():\r\n # Saving the dataset as a parquet file\r\n dataset = Dataset.from_generator(my_gen)\r\n train_path = \"/tmp/test.parquet\"\r\n dataset.to_parquet(train_path)\r\n\r\n # Creating a local dataset from the parquet file\r\n data_files = {\"train\": [str(train_path)]}\r\n builder = load_dataset_builder(\"parquet\", data_files=data_files)\r\n builder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n # Loading the dataset from the local directory as streaming\r\n dataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\n dataset.with_format(\"torch\")\r\n\r\n dl = DataLoader(dataset, batch_size=2, num_workers=1)\r\n for row in dl:\r\n print(row)\r\n\r\nmain()\r\n```", "So is there a way to fix this by changing the `mp` method? This is blocking any usage of the `datasets` library for me", "@jlehrer1 can you try adding `mp.set_start_method('fork')` at the beginning of your code? Maybe this helps you. Keep us posted. ", "I have a similar issue: \r\n> mp.set_start_method('fork')\r\n\r\n\r\nDidnt work" ]
"2023-04-08T18:45:48"
"2023-05-27T12:57:08"
null
NONE
null
### Describe the bug When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader: ``` File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__ self._iterator = self._get_iterator() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 927, in __init__ w.start() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object '_generate_examples_from_tables_wrapper.<locals>.wrapper' ``` To reproduce, run the code ``` from datasets import load_dataset data = load_dataset(args.dataset_name, split="train", streaming=True) train_len = 5000 val_len = 100 train, val = data.take(train_len), data.skip(train_len).take(val_len) traindata = IterableClipDataset(data, context_length=args.max_len, tokenizer=tokenizer, image_key="url", text_key="text") traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=args.num_workers, persistent_workers=True) ``` Where the class IterableClipDataset is a simple wrapper to cast the dataset to a torch iterabledataset, defined via ``` from torch.utils.data import Dataset, IterableDataset from torchvision.transforms import Compose, Resize, ToTensor from transformers import AutoTokenizer import requests from PIL import Image class IterableClipDataset(IterableDataset): def __init__(self, dataset, context_length: int, image_transform=None, tokenizer=None, image_key="image", text_key="text"): self.dataset = dataset self.context_length = context_length self.image_transform = Compose([Resize((224, 224)), ToTensor()]) if image_transform is None else image_transform self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") if tokenizer is None else tokenizer self.image_key = image_key self.text_key = text_key def read_image(self, url: str): try: # Try to read the image image = Image.open(requests.get(url, stream=True).raw) except: image = Image.new("RGB", (224, 224), (0, 0, 0)) return image def process_sample(self, image, text): if isinstance(image, str): image = self.read_image(image) if self.image_transform is not None: image = self.image_transform(image) text = self.tokenizer.encode( text, add_special_tokens=True, max_length=self.context_length, truncation=True, padding="max_length" ) text = 
torch.tensor(text, dtype=torch.long) return image, text def __iter__(self): for sample in self.dataset: image, text = sample[self.image_key], sample[self.text_key] yield self.process_sample(image, text) ``` ### Steps to reproduce the bug Steps to reproduce 1. Install `datasets`, `torch`, and `PIL` (if you want to reproduce exactly) 2. Run the code above ### Expected behavior Batched data is produced from the dataloader ### Environment info ``` datasets == 2.9.0 python == 3.9.12 torch == 1.11.0 ```
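Two possible workarounds for the pickling failure, sketched on the assumption (discussed in the comments) that it comes from DataLoader workers being started with the "spawn" method; neither is from the original post.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

data = load_dataset("c4", "en", split="train", streaming=True)  # any streaming dataset
train = data.take(5000)

# Workaround 1: no worker processes, so nothing needs to be pickled.
dl = DataLoader(train, batch_size=8, num_workers=0)

# Workaround 2: keep workers but force the "fork" start method for this loader only
# (not available on Windows).
dl = DataLoader(train, batch_size=8, num_workers=2, multiprocessing_context="fork")
```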
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5720/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5719/comments
https://api.github.com/repos/huggingface/datasets/issues/5719/events
https://github.com/huggingface/datasets/issues/5719
1,659,203,222
I_kwDODunzps5i5W6W
5,719
Array2D feature creates a list of list instead of a numpy array
{ "login": "offchan42", "id": 15215732, "node_id": "MDQ6VXNlcjE1MjE1NzMy", "avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/offchan42", "html_url": "https://github.com/offchan42", "followers_url": "https://api.github.com/users/offchan42/followers", "following_url": "https://api.github.com/users/offchan42/following{/other_user}", "gists_url": "https://api.github.com/users/offchan42/gists{/gist_id}", "starred_url": "https://api.github.com/users/offchan42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/offchan42/subscriptions", "organizations_url": "https://api.github.com/users/offchan42/orgs", "repos_url": "https://api.github.com/users/offchan42/repos", "events_url": "https://api.github.com/users/offchan42/events{/privacy}", "received_events_url": "https://api.github.com/users/offchan42/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! \r\n\r\nYou need to set the format to `np` before indexing the dataset to get NumPy arrays:\r\n```python\r\nfeatures = Features(dict(seq=Array2D((2,2), 'float32'))) \r\nds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)\r\nds.set_format(\"np\")\r\na = ds[0]['seq']\r\n```\r\n\r\n> I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list?\r\n\r\nThe same dataset can have examples in different types (Numpy arrays, Torch tensors, Pandas series, etc.), so recovering them all would be slow and impractical. Instead, the design of our formatting API is similar to Arrow's (the lib we use internally to store data on disk/ in RAM), which allows converting a batch of data to Python/Numpy/Pandas in a single call (and uses C++ to do so to make it faster).\r\n\r\n> Also if I change the first dimension of the Array2D shape to None, it's returning array correctly.\r\n\r\nSetting the first dimension to `None` makes it variable-length (allows passing arrays with the first dimensions of differing lengths).\r\n", "Current behavior when indexing the dataset:\r\n- Using `Array((2,2))` returns a list of lists.\r\n- Using `Array((None,2))` returns a numpy array.\r\n\r\nDon't you think this is kind of unexpected behavior from end-user perspective? \r\nAs a user, I expect that when I use `Array2D`, the behavior needs to be consistent even if I specify None or not. It should either return a list or an array. It needs to choose one. Let's say if it always return a list, then I will call `ds.set_format('np')` no problem.\r\n\r\nThe consistency can be in any of these aspects:\r\n1. preserves the type of the input data (in this case, a numpy array)\r\n2. ensure the output type is always the same (it can be either list or array, but it needs to be one of them)\r\n\r\nRight now the API doesn't conform to any of these aspects. But I think it needs to conform to one.", "I thought we made this consistent by returning lists in both scenarios...", "Fixed in #5751 " ]
"2023-04-07T21:04:08"
"2023-04-20T15:34:41"
"2023-04-20T15:34:41"
NONE
null
### Describe the bug I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has type list instead of numpy array. I think this should not be the expected behavior, especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list? Also, if I change the first dimension of the `Array2D` shape to None, it returns an array correctly. ### Steps to reproduce the bug Run this code: ```py from datasets import Dataset, Features, Array2D import numpy as np # you have to change the first dimension of the shape to None to make it return an array features = Features(dict(seq=Array2D((2,2), 'float32'))) ds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features) a = ds[0]['seq'] print(a) print(type(a)) ``` The following will be printed in stdout: ``` [[0.8127174377441406, 0.3760348856449127], [0.7510159611701965, 0.4322739541530609]] <class 'list'> ``` ### Expected behavior Each indexed item should be a list or numpy array. Currently, `Array2D((2,2))` yields a list but `Array2D((None,2))` yields an array. ### Environment info - `datasets` version: 2.11.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.13 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5719/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5718/comments
https://api.github.com/repos/huggingface/datasets/issues/5718/events
https://github.com/huggingface/datasets/pull/5718
1,658,958,406
PR_kwDODunzps5N2IZC
5,718
Reorder default data splits to have validation before test
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718\r\n```\r\nFAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']\r\n At index 0 diff: 'random' != 'train'\r\n Full diff:\r\n - ['train', 'random']\r\n + ['random', 'train']\r\n```\r\nI have checked locally and found out that the data split order is nondeterministic. I am addressing this in a separate issue.\r\n\r\nWe should first address:\r\n- #5728 \r\n- #5729", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007728 / 0.011353 (-0.003624) | 0.005275 / 0.011008 (-0.005734) | 0.097708 / 0.038508 (0.059199) | 0.039851 / 0.023109 (0.016741) | 0.333360 / 0.275898 (0.057462) | 0.376135 / 0.323480 (0.052655) | 0.006355 / 0.007986 (-0.001630) | 0.004193 / 0.004328 (-0.000135) | 0.072882 / 0.004250 (0.068631) | 0.052668 / 0.037052 (0.015615) | 0.347359 / 0.258489 (0.088870) | 0.382280 / 0.293841 (0.088440) | 0.035996 / 0.128546 (-0.092550) | 0.012517 / 0.075646 (-0.063129) | 0.334520 / 0.419271 (-0.084751) | 0.051969 / 0.043533 (0.008436) | 0.335735 / 0.255139 (0.080596) | 0.359921 / 0.283200 (0.076722) | 0.113971 / 0.141683 (-0.027712) | 1.465636 / 1.452155 (0.013481) | 1.559824 / 1.492716 (0.067108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223997 / 0.018006 (0.205991) | 0.499041 / 0.000490 (0.498551) | 0.009697 / 0.000200 (0.009497) | 0.000245 / 0.000054 (0.000190) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027031 / 0.037411 (-0.010381) | 0.110271 / 0.014526 (0.095745) | 0.115848 / 0.176557 (-0.060709) | 0.174253 / 0.737135 (-0.562883) | 0.122616 / 0.296338 (-0.173723) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted 
tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417275 / 0.215209 (0.202066) | 4.158678 / 2.077655 (2.081023) | 1.917585 / 1.504120 (0.413465) | 1.722219 / 1.541195 (0.181025) | 1.813284 / 1.468490 (0.344793) | 0.707193 / 4.584777 (-3.877584) | 3.853545 / 3.745712 (0.107833) | 3.369240 / 5.269862 (-1.900621) | 1.820264 / 4.565676 (-2.745412) | 0.087340 / 0.424275 (-0.336936) | 0.012305 / 0.007607 (0.004698) | 0.520326 / 0.226044 (0.294281) | 5.107383 / 2.268929 (2.838455) | 2.413977 / 55.444624 (-53.030647) | 2.074356 / 6.876477 (-4.802121) | 2.255959 / 2.142072 (0.113887) | 0.849850 / 4.805227 (-3.955377) | 0.170116 / 6.500664 (-6.330548) | 0.067203 / 0.075469 (-0.008267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.168158 / 1.841788 (-0.673629) | 15.046312 / 8.074308 (6.972004) | 15.113924 / 10.191392 (4.922532) | 0.145288 / 0.680424 (-0.535136) | 0.017959 / 0.534201 (-0.516242) | 0.424666 / 0.579283 (-0.154617) | 0.422560 / 0.434364 (-0.011804) | 0.526386 / 0.540337 (-0.013952) | 0.623755 / 1.386936 (-0.763181) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007676 / 0.011353 (-0.003677) | 0.005240 / 0.011008 (-0.005769) | 0.074668 / 0.038508 (0.036160) | 0.035570 / 0.023109 (0.012461) | 0.348524 / 0.275898 (0.072626) | 0.378157 / 0.323480 (0.054677) | 0.006112 / 0.007986 (-0.001873) | 0.005641 / 0.004328 (0.001312) | 0.073536 / 0.004250 (0.069286) | 0.048651 / 0.037052 (0.011599) | 0.359282 / 0.258489 (0.100793) | 0.385961 / 0.293841 (0.092120) | 0.035417 / 0.128546 (-0.093129) | 0.012227 / 0.075646 (-0.063419) | 0.085725 / 0.419271 (-0.333546) | 0.049651 / 
0.043533 (0.006118) | 0.344122 / 0.255139 (0.088983) | 0.364795 / 0.283200 (0.081595) | 0.112711 / 0.141683 (-0.028972) | 1.426823 / 1.452155 (-0.025332) | 1.534745 / 1.492716 (0.042029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201728 / 0.018006 (0.183721) | 0.448533 / 0.000490 (0.448043) | 0.003554 / 0.000200 (0.003354) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030917 / 0.037411 (-0.006494) | 0.117966 / 0.014526 (0.103440) | 0.125954 / 0.176557 (-0.050602) | 0.176382 / 0.737135 (-0.560753) | 0.130757 / 0.296338 (-0.165582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422167 / 0.215209 (0.206958) | 4.213948 / 2.077655 (2.136294) | 2.040049 / 1.504120 (0.535929) | 1.858317 / 1.541195 (0.317122) | 1.937108 / 1.468490 (0.468618) | 0.707797 / 4.584777 (-3.876979) | 3.831061 / 3.745712 (0.085349) | 3.373711 / 5.269862 (-1.896151) | 1.590343 / 4.565676 (-2.975333) | 0.086672 / 0.424275 (-0.337603) | 0.012429 / 0.007607 (0.004821) | 0.520269 / 0.226044 (0.294225) | 5.207285 / 2.268929 (2.938357) | 2.518107 / 55.444624 (-52.926517) | 2.230696 / 6.876477 (-4.645781) | 2.363164 / 2.142072 (0.221091) | 0.836749 / 4.805227 (-3.968479) | 0.169676 / 6.500664 (-6.330988) | 0.065766 / 0.075469 (-0.009703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251195 / 1.841788 (-0.590592) | 15.196091 / 8.074308 (7.121782) | 14.991600 / 10.191392 (4.800208) | 0.165335 / 0.680424 (-0.515089) | 0.017789 / 0.534201 (-0.516412) | 0.433863 / 0.579283 (-0.145420) | 0.428660 / 0.434364 (-0.005704) | 0.527385 / 0.540337 (-0.012952) | 0.628067 / 1.386936 (-0.758869) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d06b8c21ba98ae85971a2b1d135ac2ef035b59c9 \"CML watermark\")\n" ]
"2023-04-07T16:01:26"
"2023-04-27T14:43:13"
"2023-04-27T14:35:52"
MEMBER
null
This PR reorders data splits, so that by default validation appears before test. The default order becomes: [train, validation, test] instead of [train, test, validation].
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5718/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5718", "html_url": "https://github.com/huggingface/datasets/pull/5718", "diff_url": "https://github.com/huggingface/datasets/pull/5718.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5718.patch", "merged_at": "2023-04-27T14:35:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/5717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5717/comments
https://api.github.com/repos/huggingface/datasets/issues/5717/events
https://github.com/huggingface/datasets/issues/5717
1,658,729,866
I_kwDODunzps5i3jWK
5,717
Error when saving a dataset of images to disk
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately if it can help to better debug.", "Hi! I didn't manage to reproduce this behavior, so sharing the dataset with us would help a lot. \r\n\r\n> My dataset is around 50K images, is this error might be due to a bad image?\r\n\r\nThis shouldn't be the case as we save raw data to disk without decoding it.", "OK, thanks! The dataset is currently hosted on a gcs bucket. How would you like to proceed for sharing the link? ", "You could follow [this](https://cloud.google.com/storage/docs/collaboration#browser) procedure or upload the dataset to Google Drive (50K images is not that much unless high-res) and send me an email with the link.", "Thanks @mariosasko. I just sent you the GDrive link.", "Thanks @jplu! I managed to reproduce the `TypeError` - it stems from [this](https://github.com/huggingface/datasets/blob/e3f4f124a1b118a5bfff5bae76b25a68aedbebbc/src/datasets/features/image.py#L258-L264) line, which can return a `ChunkedArray` (its mask is also chunked then, which Arrow does not allow) when the embedded data is too big to fit in a standard `Array`.\r\n\r\nI'm working on a fix.", "@yairl-dn You should be able to bypass this issue by reducing `datasets.config.DEFAULT_MAX_BATCH_SIZE` (1000 by default)\r\n\r\nIn Datasets 3.0, the Image storage format will be simplified, so this should be easier to fix then.", "The same error occurs with my save_to_disk() of Audio() items. I still get it with:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE=35\r\nfrom datasets import Features, Array2D, Value, Dataset, Sequence, Audio\r\n```\r\n\r\n```\r\nSaving the dataset (41/47 shards): 88%|██████████████████████████████████████████▉ | 297/339 [01:21<00:11, 3.65 examples/s]\r\nTraceback (most recent call last):\r\nFile \"/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py\", line 155, in <module>\r\ncreate_dataset(args)\r\nFile \"/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py\", line 137, in create_dataset\r\nhf_dataset.save_to_disk(args.outds)\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_dataset.py\", line 1532, in save_to_disk\r\nfor job_id, done, content in Dataset._save_to_disk_single(**kwargs):\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_dataset.py\", line 1563, in _save_to_disk_single\r\nwriter.write_table(pa_table)\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_writer.py\", line 574, in write_table\r\npa_table = embed_table_storage(pa_table)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2307, in embed_table_storage\r\narrays = [\r\n^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2308, in <listcomp>\r\nembed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 1831, in wrapper\r\nreturn pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 1831, in <listcomp>\r\nreturn pa.chunked_array([func(chunk, *args, **kwargs) for chunk in 
array.chunks])\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2177, in embed_array_storage\r\nreturn feature.embed_storage(array)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/features/audio.py\", line 276, in embed_storage\r\nstorage = pa.StructArray.from_arrays([bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null())\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"pyarrow/array.pxi\", line 2850, in pyarrow.lib.StructArray.from_arrays\r\nFile \"pyarrow/array.pxi\", line 3290, in pyarrow.lib.c_mask_inverted_from_obj\r\nTypeError: Mask must be a pyarrow.Array of type boolean\r\n```", "Similar to @jaggzh, setting `datasets.config.DEFAULT_MAX_BATCH_SIZE` did not help in my case (same error here but for different dataset: https://github.com/Stanford-AIMI/RRG24/issues/2).\r\n\r\nThis is also reproducible with this open dataset: https://huggingface.co/datasets/nlphuji/winogavil/discussions/1\r\n\r\nHere's some code to do so:\r\n```python\r\nimport datasets\r\n\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = 1\r\n\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"nlphuji/winogavil\")\r\n\r\nds.save_to_disk(\"temp\")\r\n```\r\n\r\nI've done some more debugging with `datasets==2.18.0` (which incorporates PR #6283 as suggested by @lhoestq in the above dataset discussion), and it seems like the culprit might now be these lines: https://github.com/huggingface/datasets/blob/ca8409a8bec4508255b9c3e808d0751eb1005260/src/datasets/table.py#L2111-L2115\r\n\r\nFrom what I understand (and apologies I'm new to pyarrow), for an Image or Audio feature, these lines recursively call `embed_array_storage` for a list of either feature, ending up in the feature's `embed_storage` function. For all values in the list, `embed_storage` reads the bytes if they're not already loaded. The issue is the list being passed to the first recursive call is `array.values` which are the underlying values of `array` regardless of `array`'s slicing (as influenced by parameters such as `datasets.config.DEFAULT_MAX_BATCH_SIZE`). This results in the same overflowing list of bytes that result in the ChunkedArray being returned in `embed_storage`. Even if the array weren't to overflow and this code ran without throwing an exception, it still seems incorrect to load all values if you ultimately only want some subset with `ListArray.from_arrays(offsets, values)`; it seems some wasted effort if those values thrown out will get loaded again in the next batch and vice versa for the current batch of values during later batches.\r\n\r\nMaybe there's a fix where you could pass a mask to `embed_storage` such that it only loads the values you ultimately want for the current batch? 
Curious to see if you agree with this diagnosis of the problem and if you think this fix is viable @mariosasko?", "Would be nice if they have something similar to Dagshub's S3 sync; it worked like a charm for my bigger datasets.", "I guess also the proposed masking solution simply enables `datasets.config.DEFAULT_MAX_BATCH_SIZE` by reducing the number of elements loaded, it does not address the underlying problem of trying to load all the images as bytes into a pyarrow array.\r\n\r\nI'm happy to turn this into an actual PR but here's what I've implemented locally at `tables.py:embed_array_storage` to fix the above test case (`nlphuji/winogavil`) and my own use case:\r\n```python\r\n elif pa.types.is_list(array.type):\r\n # feature must be either [subfeature] or Sequence(subfeature)\r\n # Merge offsets with the null bitmap to avoid the \"Null bitmap with offsets slice not supported\" ArrowNotImplementedError\r\n array_offsets = _combine_list_array_offsets_with_mask(array)\r\n\r\n # mask underlying struct array so array_values.to_pylist()\r\n # fills None (see feature.embed_storage)\r\n idxs = np.arange(len(array.values))\r\n idxs = pa.ListArray.from_arrays(array_offsets, idxs).flatten()\r\n mask = np.ones(len(array.values)).astype(bool)\r\n mask[idxs] = False\r\n mask = pa.array(mask)\r\n # indexing 0 might be problematic but not sure\r\n # how else to get arbitrary keys from a struct array\r\n array_keys = array.values[0].keys()\r\n # is array.values always a struct array?\r\n array_values = pa.StructArray.from_arrays(\r\n arrays=[array.values.field(k) for k in array_keys],\r\n names=array_keys,\r\n mask=mask,\r\n )\r\n if isinstance(feature, list):\r\n return pa.ListArray.from_arrays(array_offsets, _e(array_values, feature[0]))\r\n if isinstance(feature, Sequence) and feature.length == -1:\r\n return pa.ListArray.from_arrays(array_offsets, _e(array_values, feature.feature))\r\n```\r\n\r\nAgain though I'm new to pyarrow so this might not be the cleanest implementation, also I'm really not sure if there are other cases where this solution doesn't work. Would love to get some feedback from the hf folks!", "I have the same issue, with an audio dataset where file sizes vary significantly (~0.2-200 mb). Reducing `datasets.config.DEFAULT_MAX_BATCH_SIZE` doesn't help." ]
"2023-04-07T11:59:17"
"2024-03-18T08:26:19"
null
CONTRIBUTOR
null
### Describe the bug Hello! I have an issue when I try to save my dataset of images to disk. The error I get is: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_to_disk for job_id, done, content in Dataset._save_to_disk_single(**kwargs): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1473, in _save_to_disk_single writer.write_table(pa_table) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_writer.py", line 570, in write_table pa_table = embed_table_storage(pa_table) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2268, in embed_table_storage arrays = [ File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2269, in <listcomp> embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name] File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2142, in embed_array_storage return feature.embed_storage(array) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/features/image.py", line 269, in embed_storage storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null()) File "pyarrow/array.pxi", line 2766, in pyarrow.lib.StructArray.from_arrays File "pyarrow/array.pxi", line 2961, in pyarrow.lib.c_mask_inverted_from_obj TypeError: Mask must be a pyarrow.Array of type boolean ``` My dataset is around 50K images; might this error be due to a bad image? Thanks for the help. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="/path/to/dataset") dataset["train"].save_to_disk("./myds", num_shards=40) ``` ### Expected behavior Having my dataset properly saved to disk. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
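A sketch of the workaround the reporter found (and the comments explain): raise `num_shards` so each shard stays below the ~1000-example Arrow writer batch. The target of 800 examples per shard is an assumption, not a hard rule.

```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
train = dataset["train"]

# Aim for fewer than ~1000 images per shard (800 worked in this thread).
examples_per_shard = 800
num_shards = max(1, -(-len(train) // examples_per_shard))  # ceiling division

train.save_to_disk("./myds", num_shards=num_shards)
```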
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5717/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5716/comments
https://api.github.com/repos/huggingface/datasets/issues/5716/events
https://github.com/huggingface/datasets/issues/5716
1,658,613,092
I_kwDODunzps5i3G1k
5,716
Handle empty audio
{ "login": "v-yunbin", "id": 38179632, "node_id": "MDQ6VXNlcjM4MTc5NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/v-yunbin", "html_url": "https://github.com/v-yunbin", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "repos_url": "https://api.github.com/users/v-yunbin/repos", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example({\"path\": \"empty.wav\", \"bytes\": None})\r\n```\r\nBut without success.\r\n\r\nAlso, what version of `librosa` is installed in your env? (You can get this info with `python -c \"import librosa; print(librosa.__version__)`)\r\n\r\n", "I'm closing this issue as the reproducer hasn't been provided." ]
"2023-04-07T09:51:40"
"2023-09-27T17:47:08"
"2023-09-27T17:47:08"
NONE
null
Some audio paths exist, but the files are empty, and an error is reported when reading them. How can the filter function be used to skip empty audio paths? When an audio file is empty, resampling breaks: `array, sampling_rate = sf.read(f)` followed by `array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)`
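One possible way to drop such files before any decoding happens, sketched with `soundfile`; the `data_dir` path, column name, and target sampling rate are placeholders rather than details from the original report:

```python
import soundfile as sf
from datasets import Audio, load_dataset

ds = load_dataset("audiofolder", data_dir="/path/to/audio")["train"]
ds = ds.cast_column("audio", Audio(decode=False))  # keep {"bytes", "path"} so filtering never decodes

def has_frames(example):
    try:
        return sf.info(example["audio"]["path"]).frames > 0
    except Exception:  # unreadable or truncated file
        return False

ds = ds.filter(has_frames)
ds = ds.cast_column("audio", Audio(sampling_rate=16000))  # re-enable decoding with resampling
```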
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5716/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5715/comments
https://api.github.com/repos/huggingface/datasets/issues/5715/events
https://github.com/huggingface/datasets/issues/5715
1,657,479,788
I_kwDODunzps5iyyJs
5,715
Return Numpy Array (fixed length) Mode, in __getitem__, Instead of List
{ "login": "jungbaepark", "id": 34066771, "node_id": "MDQ6VXNlcjM0MDY2Nzcx", "avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungbaepark", "html_url": "https://github.com/jungbaepark", "followers_url": "https://api.github.com/users/jungbaepark/followers", "following_url": "https://api.github.com/users/jungbaepark/following{/other_user}", "gists_url": "https://api.github.com/users/jungbaepark/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungbaepark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungbaepark/subscriptions", "organizations_url": "https://api.github.com/users/jungbaepark/orgs", "repos_url": "https://api.github.com/users/jungbaepark/repos", "events_url": "https://api.github.com/users/jungbaepark/events{/privacy}", "received_events_url": "https://api.github.com/users/jungbaepark/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n " ]
"2023-04-06T13:57:48"
"2023-04-20T17:16:26"
"2023-04-20T17:16:26"
NONE
null
### Feature request There is an old, well-known but easily forgotten problem when using multiprocessing with the PyTorch dataloader: RAM or shared-memory usage grows too high when num_workers > 1 and the dataset or dataloader returns a "List" or "Dict". https://github.com/pytorch/pytorch/issues/13246 With huggingface datasets, unfortunately, the default return type is a list, so the problem shows up often unless something is configured to avoid it. However, the issue can be relieved when the returned output has a fixed length. Therefore, I request a mode that returns fixed-length outputs (e.g. numpy arrays) rather than lists. A possible design would be loading datasets as ```python load_dataset(..., with_return_as_fixed_tensor=True) ``` ### Motivation The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662 : NumPy and Pandas do not seem to have this problem, even though both support the string type. (I'm not sure whether the Sequence feature of huggingface datasets can solve this problem as well.) ### Your contribution I'll read it! Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5715/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5714/comments
https://api.github.com/repos/huggingface/datasets/issues/5714/events
https://github.com/huggingface/datasets/pull/5714
1,657,388,033
PR_kwDODunzps5NxIOc
5,714
Fix xnumpy_load for .npz files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006498 / 0.011353 (-0.004855) | 0.004406 / 0.011008 (-0.006602) | 0.097136 / 0.038508 (0.058628) | 0.027711 / 0.023109 (0.004601) | 0.303092 / 0.275898 (0.027194) | 0.336804 / 0.323480 (0.013324) | 0.004838 / 0.007986 (-0.003148) | 0.004533 / 0.004328 (0.000204) | 0.075062 / 0.004250 (0.070812) | 0.035105 / 0.037052 (-0.001947) | 0.310245 / 0.258489 (0.051756) | 0.347086 / 0.293841 (0.053245) | 0.030867 / 0.128546 (-0.097679) | 0.011436 / 0.075646 (-0.064211) | 0.320728 / 0.419271 (-0.098544) | 0.042303 / 0.043533 (-0.001230) | 0.308177 / 0.255139 (0.053038) | 0.333673 / 0.283200 (0.050473) | 0.084736 / 0.141683 (-0.056947) | 1.477391 / 1.452155 (0.025237) | 1.530399 / 1.492716 (0.037682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212698 / 0.018006 (0.194692) | 0.409098 / 0.000490 (0.408608) | 0.004202 / 0.000200 (0.004002) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022725 / 0.037411 (-0.014686) | 0.095866 / 0.014526 (0.081340) | 0.104153 / 0.176557 (-0.072404) | 0.162964 / 0.737135 (-0.574171) | 0.106505 / 0.296338 (-0.189834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431336 / 0.215209 (0.216127) | 4.283290 / 2.077655 (2.205635) | 
1.982418 / 1.504120 (0.478298) | 1.762104 / 1.541195 (0.220909) | 1.807528 / 1.468490 (0.339038) | 0.695507 / 4.584777 (-3.889270) | 3.376299 / 3.745712 (-0.369413) | 1.856642 / 5.269862 (-3.413219) | 1.154258 / 4.565676 (-3.411419) | 0.082749 / 0.424275 (-0.341526) | 0.012289 / 0.007607 (0.004682) | 0.525842 / 0.226044 (0.299798) | 5.285764 / 2.268929 (3.016835) | 2.389926 / 55.444624 (-53.054698) | 2.021830 / 6.876477 (-4.854646) | 2.107460 / 2.142072 (-0.034612) | 0.808118 / 4.805227 (-3.997109) | 0.150791 / 6.500664 (-6.349873) | 0.065825 / 0.075469 (-0.009644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206939 / 1.841788 (-0.634849) | 13.795902 / 8.074308 (5.721594) | 14.107950 / 10.191392 (3.916558) | 0.144300 / 0.680424 (-0.536124) | 0.016478 / 0.534201 (-0.517723) | 0.379395 / 0.579283 (-0.199888) | 0.388437 / 0.434364 (-0.045927) | 0.451443 / 0.540337 (-0.088894) | 0.523142 / 1.386936 (-0.863794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006503 / 0.011353 (-0.004850) | 0.004578 / 0.011008 (-0.006430) | 0.076278 / 0.038508 (0.037770) | 0.028052 / 0.023109 (0.004943) | 0.337873 / 0.275898 (0.061975) | 0.371368 / 0.323480 (0.047888) | 0.005086 / 0.007986 (-0.002899) | 0.003354 / 0.004328 (-0.000975) | 0.076876 / 0.004250 (0.072625) | 0.039146 / 0.037052 (0.002093) | 0.340299 / 0.258489 (0.081810) | 0.381209 / 0.293841 (0.087368) | 0.031771 / 0.128546 (-0.096775) | 0.011670 / 0.075646 (-0.063976) | 0.085156 / 0.419271 (-0.334116) | 0.041990 / 0.043533 (-0.001543) | 0.338644 / 0.255139 (0.083505) | 0.362461 / 0.283200 (0.079262) | 0.089772 / 0.141683 (-0.051911) | 1.480341 / 1.452155 (0.028187) | 1.562815 / 1.492716 (0.070099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205700 / 0.018006 (0.187694) | 0.402206 / 0.000490 (0.401716) | 0.001212 / 0.000200 (0.001012) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025172 / 0.037411 (-0.012240) | 0.100959 / 0.014526 (0.086433) | 0.108464 / 0.176557 (-0.068093) | 0.161321 / 0.737135 (-0.575814) | 0.114245 / 0.296338 (-0.182093) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437425 / 0.215209 (0.222216) | 4.362212 / 2.077655 (2.284557) | 2.068815 / 1.504120 (0.564695) | 1.864089 / 1.541195 (0.322894) | 1.909038 / 1.468490 (0.440548) | 0.696097 / 4.584777 (-3.888680) | 3.358628 / 3.745712 (-0.387084) | 2.999085 / 5.269862 (-2.270777) | 1.533917 / 4.565676 (-3.031760) | 0.083010 / 0.424275 (-0.341266) | 0.012372 / 0.007607 (0.004765) | 0.539926 / 0.226044 (0.313882) | 5.438326 / 2.268929 (3.169397) | 2.498581 / 55.444624 (-52.946043) | 2.153359 / 6.876477 (-4.723117) | 2.177891 / 2.142072 (0.035819) | 0.803169 / 4.805227 (-4.002059) | 0.151079 / 6.500664 (-6.349585) | 0.065981 / 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336682 / 1.841788 (-0.505106) | 14.133055 / 8.074308 (6.058747) | 14.033972 / 10.191392 (3.842580) | 0.152109 / 0.680424 (-0.528315) | 0.016475 / 0.534201 (-0.517726) | 0.387808 / 0.579283 (-0.191475) | 0.378347 / 0.434364 (-0.056017) | 0.484732 / 0.540337 (-0.055606) | 0.569907 / 1.386936 (-0.817029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1c4ec00511868bd881e84a6f7e0333648d833b8e \"CML watermark\")\n" ]
"2023-04-06T13:01:45"
"2023-04-07T09:23:54"
"2023-04-07T09:16:57"
MEMBER
null
PR: - #5626 implemented support for streaming `.npy` files by using `numpy.load`. However, it introduced a bug when used with `.npz` files, within a context manager: ``` ValueError: seek of closed file ``` or in streaming mode: ``` ValueError: I/O operation on closed file. ``` This PR fixes the bug and tests for both `.npy` and `.npz` files. Fix #5711.
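For context, the failure mode can be reproduced with plain NumPy: `.npz` loading is lazy, so closing the underlying handle before the arrays are accessed triggers the same error. This is an illustrative sketch of the failure mode, not the patched library code:

```python
import numpy as np

np.savez("data.npz", x=np.arange(3))

f = open("data.npz", "rb")
npz = np.load(f)  # returns a lazy NpzFile backed by the open handle
f.close()
npz["x"]  # ValueError: seek of closed file -- the zip member is only read on access
```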
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5714/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5714", "html_url": "https://github.com/huggingface/datasets/pull/5714", "diff_url": "https://github.com/huggingface/datasets/pull/5714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5714.patch", "merged_at": "2023-04-07T09:16:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/5713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5713/comments
https://api.github.com/repos/huggingface/datasets/issues/5713/events
https://github.com/huggingface/datasets/issues/5713
1,657,141,251
I_kwDODunzps5ixfgD
5,713
ArrowNotImplementedError when loading dataset from the hub
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. The estimation is currently done using the first samples of the dataset (which can surely be improved). We should probably open an issue to fix this once and for all.\r\n\r\nAnyway for your specific dataset I'd suggest you to pass `num_shards` instead of `max_shard_size` for now, and make sure to have enough shards to end up with shards smaller than 2GB", "Hi Quentin! Thanks a lot! Using `num_shards` instead of `max_shard_size` works as expected.\r\n\r\nIndeed the way you describe how the size is computed cannot really work with the dataset I'm building as all the image doesn't have the same resolution and then size. Opening an issue on this might be a good idea." ]
"2023-04-06T10:27:22"
"2023-04-06T13:06:22"
"2023-04-06T13:06:21"
CONTRIBUTOR
null
### Describe the bug Hello, I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error: ``` Traceback (most recent call last): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single for _, table in generator: File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1323, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug Create the dataset and push it to the hub: ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="/path/to/dataset") dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB") ``` Then use it: ```python from datasets import load_dataset dataset = load_dataset("org/dataset-name") ``` ### Expected behavior To properly download and use the pushed dataset. Something else to note is that I specified to have shards of 1GB max, but at the end, for the train set, it is an almost 7GB single file that is pushed. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5713/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5712/comments
https://api.github.com/repos/huggingface/datasets/issues/5712/events
https://github.com/huggingface/datasets/issues/5712
1,655,972,106
I_kwDODunzps5itCEK
5,712
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
{ "login": "rcasero", "id": 1219084, "node_id": "MDQ6VXNlcjEyMTkwODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcasero", "html_url": "https://github.com/rcasero", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "organizations_url": "https://api.github.com/users/rcasero/orgs", "repos_url": "https://api.github.com/users/rcasero/repos", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "received_events_url": "https://api.github.com/users/rcasero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing since this is a duplicate of #5711", "> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate" ]
"2023-04-05T16:47:10"
"2023-04-06T08:32:37"
"2023-04-05T17:17:44"
NONE
null
### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with error ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5712/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5711/comments
https://api.github.com/repos/huggingface/datasets/issues/5711/events
https://github.com/huggingface/datasets/issues/5711
1,655,971,647
I_kwDODunzps5itB8_
5,711
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
{ "login": "rcasero", "id": 1219084, "node_id": "MDQ6VXNlcjEyMTkwODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcasero", "html_url": "https://github.com/rcasero", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "organizations_url": "https://api.github.com/users/rcasero/orgs", "repos_url": "https://api.github.com/users/rcasero/repos", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "received_events_url": "https://api.github.com/users/rcasero/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```python\r\nreturn np.load(xopen(filepath_or_buffer, \"rb\", use_auth_token=use_auth_token), *args, **kwargs)\r\n```\r\nshould fix the issue.\r\n\r\n(Maybe this is also worth doing a patch release afterward)", "Thanks for reporting, @rcasero.\r\n\r\nI can have a look..." ]
"2023-04-05T16:46:49"
"2023-04-07T09:16:59"
"2023-04-07T09:16:59"
NONE
null
### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with error ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(embedding_filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5711/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5710/comments
https://api.github.com/repos/huggingface/datasets/issues/5710/events
https://github.com/huggingface/datasets/issues/5710
1,655,703,534
I_kwDODunzps5isAfu
5,710
OSError: Memory mapping file failed: Cannot allocate memory
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they are more experienced on this matter). Also, googling \"mmap cannot allocate memory\" returns some approaches to solving this problem." ]
"2023-04-05T14:11:26"
"2023-04-20T17:16:40"
"2023-04-20T17:16:40"
NONE
null
### Describe the bug Hello, I have a series of datasets each of 5 GB, 600 datasets in total. So together this makes 3TB. When I trying to load all the 600 datasets into memory, I get the above error message. Is this normal because I'm hitting the max size of memory mapping of the OS? Thank you ```terminal 0_21/cache-e9c42499f65b1881.arrow load_hf_datasets_from_disk: 82%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 494/600 [07:26<01:35, 1.11it/s] Traceback (most recent call last): File "example_load_genkalm_dataset.py", line 35, in <module> multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay) File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length, File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset hf_ds = load_from_disk(path_or_name) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk arrow_table = concat_tables( File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables tables = list(tables) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr> table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix()) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file table = _memory_mapped_arrow_table_from_file(filename) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file memory_mapped_stream = pa.memory_map(filename) File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ``` ### Steps to reproduce the bug Sorry I can not provide a reproducible code as the data is stored on my server and it's too large to share. ### Expected behavior I expect the 3TB of data can be fully mapped to memory ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyArrow version: 11.0.0 - Pandas version: 1.0.1
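One avenue worth checking (an assumption about the cause, not a confirmed diagnosis): on Linux, `mmap` can fail with "Cannot allocate memory" when the per-process mapping count reaches `vm.max_map_count`, which is easy to hit when memory-mapping hundreds of Arrow files. A quick way to inspect both values:

```python
# Linux-only introspection of memory-map limits; raising vm.max_map_count requires root (sysctl).
with open("/proc/sys/vm/max_map_count") as f:
    print("vm.max_map_count =", f.read().strip())

with open("/proc/self/maps") as f:
    print("mappings held by this process =", sum(1 for _ in f))
```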
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5710/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5709/comments
https://api.github.com/repos/huggingface/datasets/issues/5709/events
https://github.com/huggingface/datasets/issues/5709
1,655,423,503
I_kwDODunzps5iq8IP
5,709
Manually made dataset info not taken into account
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually weird that when you push your dataset to the Hub, a `dataset_infos.json` file is created, because this file is deprecated and it should create `README.md` with the `dataset_info` field instead. Some keys are also deprecated, like \"supervised_keys\" and \"task_templates\".\r\n\r\nCan you please provide a toy reproducible example of how you create and push the dataset? And also why do you want to change this file, especially the number of bytes and examples?", "Hi @polinaeterna Yes I have created the dataset with `Dataset.from_dict` applied some updates afterward and when I pushed to the hub I had a `dataset_infos.json` file and there was a `README.md` file as well.\r\n\r\nI didn't know that the JSON file was deprecated. So I have built my dataset with `ImageBuilder` instead and now it works like a charm without having to touch anything.\r\n\r\nI haven't succeed to reproduce the creation of the JSON file with a toy example, hence, I certainly did some mistakes when I have manipulated my dataset manually at first. My bad." ]
"2023-04-05T11:15:17"
"2023-04-06T08:52:20"
"2023-04-06T08:52:19"
CONTRIBUTOR
null
### Describe the bug Hello, I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` methods. Once the dataset is created I push it on the hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo in same time. Hence I update it manually with all the missing info, but when I download the dataset the info are never updated. Former `dataset_infos.json` file: ``` {"default": { "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "_type": "Image" }, "labels": { "names": [ "Fake", "Real" ], "_type": "ClassLabel" } }, "splits": { "validation": { "name": "validation", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null }, "train": { "name": "train", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null } }, "download_size": 1802008414, "dataset_size": 1802020188.0, "size_in_bytes": 3604028602.0 }} ``` After I update it manually it looks like: ``` { "bstrai--deepfake-detection":{ "description":"", "citation":"", "homepage":"", "license":"", "features":{ "image":{ "decode":true, "id":null, "_type":"Image" }, "labels":{ "num_classes":2, "names":[ "Fake", "Real" ], "id":null, "_type":"ClassLabel" } }, "supervised_keys":{ "input":"image", "output":"labels" }, "task_templates":[ { "task":"image-classification", "image_column":"image", "label_column":"labels" } ], "config_name":null, "splits":{ "validation":{ "name":"validation", "num_bytes":36627822, "num_examples":123, "dataset_name":"deepfake-detection" }, "train":{ "name":"train", "num_bytes":901023694, "num_examples":3200, "dataset_name":"deepfake-detection" } }, "download_checksums":null, "download_size":937562209, "dataset_size":937651516, "size_in_bytes":1875213725 } } ``` Anything I should do to have the new infos in the `dataset_infos.json` to be taken into account? Or it is not possible yet? Thanks! ### Steps to reproduce the bug - ### Expected behavior - ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5709/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5708/comments
https://api.github.com/repos/huggingface/datasets/issues/5708/events
https://github.com/huggingface/datasets/issues/5708
1,655,023,642
I_kwDODunzps5ipaga
5,708
Dataset sizes are in MiB instead of MB in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5", "looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`", "I am only looping trough the dataset cards, assuming that all of them were created with MiB.\r\n\r\nI agree we should only run the bulk edit once for all canonical datasets: I'm using a for-loop over canonical datasets.", "yes, worst case, we have this in structured data:\r\n\r\n<img width=\"337\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230037051-06caddcb-08c8-4953-a710-f3d122917db3.png\">\r\n", "I have just included as well the conversion from MB to GB if necessary. See: \r\n- https://huggingface.co/datasets/bookcorpus/discussions/2/files\r\n- https://huggingface.co/datasets/asnq/discussions/2/files", "Nice. Is it another loop? Because in https://huggingface.co/datasets/amazon_us_reviews/discussions/2/files we have `32377.29 MB` for example", "First, I tested some batches to check the changes made. Then I incorporated the MB to GB conversion. Now I'm running the rest.", "The bulk edit parsed 751 canonical datasets and updated 166.", "Thanks a lot!\r\n\r\nThe sizes now match as expected!\r\n\r\n<img width=\"1446\" alt=\"Capture d’écran 2023-04-05 à 16 10 15\" src=\"https://user-images.githubusercontent.com/1676121/230107044-ac2a76ea-a4fe-4e81-a925-f464b85f5edd.png\">\r\n", "I made another bulk edit of ancient canonical datasets that were moved to community organization. I have parsed 11 datasets and opened a PR on 3 of them:\r\n- [x] \"allenai/scicite\": https://huggingface.co/datasets/allenai/scicite/discussions/3\r\n- [x] \"allenai/scifact\": https://huggingface.co/datasets/allenai/scifact/discussions/2\r\n- [x] \"dair-ai/emotion\": https://huggingface.co/datasets/dair-ai/emotion/discussions/6", "should we force merge the PR and close this issue?", "I merged the PRs for \"scicite\" and \"scifact\"." ]
"2023-04-05T06:36:03"
"2023-12-21T10:20:28"
"2023-12-21T10:20:27"
MEMBER
null
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929): Now we show the dataset size: - from the dataset card (in the side column) - from the datasets-server (in the viewer) But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932) <img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png"> TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:` - [x] Bulk edit on the Hub to fix this in all canonical datasets - [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
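For reference, the mismatch comes purely from the unit definition (1 MiB = 2**20 bytes vs 1 MB = 10**6 bytes); a short conversion shows the size the viewer reports for a value stored in MiB:

```python
size_mib = 28.21                      # a value computed with 1 MiB = 2**20 bytes
size_mb = size_mib * 2**20 / 10**6    # the same size expressed in MB
print(f"{size_mib} MiB is about {size_mb:.2f} MB")  # 28.21 MiB is about 29.58 MB
```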
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5708/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5706/comments
https://api.github.com/repos/huggingface/datasets/issues/5706/events
https://github.com/huggingface/datasets/issues/5706
1,653,545,835
I_kwDODunzps5ijxtr
5,706
Support categorical data types for Parquet
{ "login": "kklemon", "id": 1430243, "node_id": "MDQ6VXNlcjE0MzAyNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kklemon", "html_url": "https://github.com/kklemon", "followers_url": "https://api.github.com/users/kklemon/followers", "following_url": "https://api.github.com/users/kklemon/following{/other_user}", "gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}", "starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kklemon/subscriptions", "organizations_url": "https://api.github.com/users/kklemon/orgs", "repos_url": "https://api.github.com/users/kklemon/repos", "events_url": "https://api.github.com/users/kklemon/events{/privacy}", "received_events_url": "https://api.github.com/users/kklemon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "mhattingpete", "id": 22622299, "node_id": "MDQ6VXNlcjIyNjIyMjk5", "avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mhattingpete", "html_url": "https://github.com/mhattingpete", "followers_url": "https://api.github.com/users/mhattingpete/followers", "following_url": "https://api.github.com/users/mhattingpete/following{/other_user}", "gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}", "starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions", "organizations_url": "https://api.github.com/users/mhattingpete/orgs", "repos_url": "https://api.github.com/users/mhattingpete/repos", "events_url": "https://api.github.com/users/mhattingpete/events{/privacy}", "received_events_url": "https://api.github.com/users/mhattingpete/received_events", "type": "User", "site_admin": false }
[ { "login": "mhattingpete", "id": 22622299, "node_id": "MDQ6VXNlcjIyNjIyMjk5", "avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mhattingpete", "html_url": "https://github.com/mhattingpete", "followers_url": "https://api.github.com/users/mhattingpete/followers", "following_url": "https://api.github.com/users/mhattingpete/following{/other_user}", "gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}", "starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions", "organizations_url": "https://api.github.com/users/mhattingpete/orgs", "repos_url": "https://api.github.com/users/mhattingpete/repos", "events_url": "https://api.github.com/users/mhattingpete/events{/privacy}", "received_events_url": "https://api.github.com/users/mhattingpete/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:\r\n- the conversion from HF type to PyArrow type is done in `get_nested_type`\r\n- the conversion from Pyarrow type to HF type is done in `generate_from_arrow_type`\r\n- `encode_nested_example` and `decode_nested_example` are used to do user's value (what users see) <-> storage value (what is in the pyarrow.array) if there's any conversion to do", "@kklemon did you implement this? Otherwise I would like to give it a try", "@mhattingpete no, I hadn't time for this so far. Feel free to work on this.", "#self-assign", "This would be super useful, so +1. \r\n\r\nAlso, these prior issues/PRs seem relevant: \r\nhttps://github.com/huggingface/datasets/issues/1906\r\nhttps://github.com/huggingface/datasets/pull/1936", "Hi, this is a really useful feature, has this been implemented yet? ", "Hey folks -- I'm thinking about trying a PR for this. As far as I can tell the only sticky point is that auto-generation of features from a pyarrow schema will fail under the current `generate_from_arrow_type` function because there is no encoding of the categorical string label -> int map in the pa.dictionary type itself; that is stored with the full array. \r\n\r\nI see two ways to solve this. Option 1 is to require datasets with categorical types to use pyarrow schema metadata to encode the entire HF feature dictionary, that way categorical types don't ever need to be inferred from the pa type alone. The downside to this is that it means that these datasets will be a bit brittle, as if the feature encoding API ever changes, they will suddenly be unloadable. \r\n\r\nThe other option is to modify `generate_from_arrow_type` to take per-field metadata, and include just that metadata (the category labels) in the schema metadata. \r\n\r\nDoes anyone at HF have any preference on these two (or alternate) approaches?", "Maybe we don't need to store the string label -> int map in the categorical for the corresponding `datasets` feature ?", "I think that does need to be stored in the Feature object. Similar to how\r\n`ClassLabel` needs the class names for some of the provided default\r\nfunctionality (e.g., encoding or decoding values) here, a categorical\r\nfeature needs the same. Without storing that information, would you suggest\r\nthat categorical features just be stored internally as integer arrays?\r\n\r\nOn Fri, Sep 8, 2023, 5:37 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Maybe we don't need to store the string label -> int map in the\r\n> categorical for the corresponding datasets feature ?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711375652>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5XZV3RA4GBRVBLJN72LXZLROZANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Well IIRC you can concatenate two Arrow arrays with different dictionaries together. But for `datasets` would mean updating the `datasets` features when concatenating two arrays of the same type, which is not supported right now. 
That's why if there is a way to have it without storing the mapping in the feature object it would be nice.\r\n\r\nFor decoding we do have the string<->integer mapping from the array `dictionary` attribute so we're fine. For encoding I think it can work if we only encode when converting python objects to pyarrow in `TypedSequence.__arrow_array__` in `arow_writer.py`. It can work by converting the python objects to a pyarrow array and then use the `dictionary_encode` method.\r\n\r\nAnother concern about concatenation: I noticed **pyarrow creates the new dictionary and indices in memory** when concatenating two dictionary encoded arrays. This can be a problem for big datastets, and we should probably use ChunkedArray objects instead. This can surely be taken care of in `array_concat` in `table.py`\r\n\r\ncc @mariosasko in case you have other ideas\r\n\r\n", "Hmm, that is a good point. What if we implemented this feature first in a\r\nmanner that didn't allow concatenation of arrays with different index to\r\ncategory maps? Then concatenation would be very straightforward, and I\r\nthink this is reasonable if the index to category map is stored in the\r\nschema as well. Obviously, this is limited in how folks could use the\r\nfeature, but they can always fall back to raw strings if needed, and as\r\nusage increases we'll have more data to see what the right solution here\r\nis.\r\n\r\nOn Fri, Sep 8, 2023, 6:49 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Well IIRC you can concatenate two Arrow arrays with different dictionaries\r\n> together. But for datasets would mean updating the datasets features when\r\n> concatenating two arrays of the same type, which is not supported right\r\n> now. That's why if there is a way to have it without storing the mapping in\r\n> the feature object it would be nice.\r\n>\r\n> For decoding we do have the string<->integer mapping from the array\r\n> dictionary attribute so we're fine. For encoding I think it can work if\r\n> we only encode when converting python objects to pyarrow in\r\n> TypedSequence.__arrow_array__ in arow_writer.py. It can work by\r\n> converting the python objects to a pyarrow array and then use the\r\n> dictionary_encode method.\r\n>\r\n> Another concern about concatenation: I noticed *pyarrow creates the new\r\n> dictionary and indices in memory* when concatenating two dictionary\r\n> encoded arrays. This can be a problem for big datastets, and we should\r\n> probably use ChunkedArray objects instead. This can surely be taken care of\r\n> in array_concat in table.py\r\n>\r\n> cc @mariosasko <https://github.com/mariosasko> in case you have other\r\n> ideas\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711468806>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X4E2KC2IXLDPYR3XZLXZLZ2FANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "@lhoestq @mariosasko just re-pinging on this so I can push forward further here. What are your thoughts on disallowing concatenation of categorical arrays for now such that the index to category map can be stored in the schema metadata? 
And/or other approaches that should be taken here?\r\n", "I think the easiest for now would be to add a `dictionary_decode` argument to the parquet loaders that would convert the dictionary type back to strings when set to `True`, and make `dictionary_decode=False` raise `NotImplementedError` for now if there are dictionary type columns. Would that be ok as a first step ?", "I mean, that would certainly be easiest but I don't think it really solves this issue in a meaningful way. This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types. Given that those savings are what is of real interest here, I think keeping it explicit that it is not supported (and forcing the user to do the conversion) might actually be better that way this problem stays top of mind.\r\n\r\nIs there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?", "> This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types.\r\n\r\nThere's already a ClassLabel type that does pretty much the same thing (store as integer instead of string) if it can help\r\n\r\n> Is there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?\r\n\r\nYea we do concatenation quite often (e.g. in `map`) so I don't think this is a viable option", "But how often in the cases where concatenation is done now would the\r\ncategorical label vocabulary actually change? I think it would be in\r\nbasically none of them. And in such cases, concatenation remains very easy,\r\nno?\r\n\r\nOn Fri, Sep 22, 2023, 12:02 PM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> This just changes the burden from string conversion from the user to HF\r\n> Datasets, but doesn't actually enable HF Datasets to take advantage of the\r\n> (very significant) storage and associated speed/memory savings offered by\r\n> using categorical types.\r\n>\r\n> There's already a ClassLabel type that does pretty much the same thing\r\n> (store as integer instead of string) if it can help\r\n>\r\n> Is there an objection with supporting categorical types explicitly through\r\n> the medium I outlined above, where the error is raised if you try to concat\r\n> two differently typed categorical columns?\r\n>\r\n> Yea we do concatenation quite often (e.g. 
in map) so I don't think this\r\n> is a viable option\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1731667012>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X5CGWFXDCML6UKCWYLX3WZBXANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Arrow IPC seems to require unified dictionaries anyway so actually we could surely focus only on this use case indeed @mmcdermott \r\n\r\nSo defining a new Feature type in `datasets` that contains the dictionary mapping should be fine (and concatenation would work out of the box), and it should also take care of checking that the data it encodes/decodes has the right dictionary. Do you think it can be done without impacting iterating speed for the other types @mariosasko ?\r\n\r\nRight now we have little bandwidth to work in this kind of things though" ]
"2023-04-04T09:45:35"
"2023-09-22T16:53:37"
null
NONE
null
### Feature request Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns: ```python import pandas as pd import pyarrow.parquet as pq from datasets import load_dataset # Create categorical sample DataFrame df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category') df.to_parquet('data.parquet') # Read back as pyarrow table table = pq.read_table('data.parquet') print(table.schema) # type: dictionary<values=string, indices=int32, ordered=0> # Load with huggingface datasets load_dataset('parquet', data_files='data.parquet') ``` Error: ``` Traceback (most recent call last): File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single writer.write_table(table) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table self._build_writer(inferred_schema=pa_table.schema) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer inferred_features = Features.from_arrow_schema(inferred_schema) File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table NotImplementedError ``` ### Motivation Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow` can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature. ### Your contribution I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first.
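Until dictionary types are supported natively, one possible workaround (a sketch, not part of the feature request above) is to cast categorical columns back to plain strings before handing the data to `datasets`, at the cost of losing the storage savings:

```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"type": ["foo", "bar"]}).astype("category")

# Cast categorical columns back to strings so the Arrow schema contains no dictionary types.
for col in df.select_dtypes(include="category").columns:
    df[col] = df[col].astype(str)

ds = Dataset.from_pandas(df)
print(ds.features)  # {'type': Value(dtype='string', id=None)}
```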
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5706/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5705/comments
https://api.github.com/repos/huggingface/datasets/issues/5705/events
https://github.com/huggingface/datasets/issues/5705
1,653,500,383
I_kwDODunzps5ijmnf
5,705
Getting next item from IterableDataset took forever.
{ "login": "HongtaoYang", "id": 16588434, "node_id": "MDQ6VXNlcjE2NTg4NDM0", "avatar_url": "https://avatars.githubusercontent.com/u/16588434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HongtaoYang", "html_url": "https://github.com/HongtaoYang", "followers_url": "https://api.github.com/users/HongtaoYang/followers", "following_url": "https://api.github.com/users/HongtaoYang/following{/other_user}", "gists_url": "https://api.github.com/users/HongtaoYang/gists{/gist_id}", "starred_url": "https://api.github.com/users/HongtaoYang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HongtaoYang/subscriptions", "organizations_url": "https://api.github.com/users/HongtaoYang/orgs", "repos_url": "https://api.github.com/users/HongtaoYang/repos", "events_url": "https://api.github.com/users/HongtaoYang/events{/privacy}", "received_events_url": "https://api.github.com/users/HongtaoYang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...", "Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beheaviour." ]
"2023-04-04T09:16:17"
"2023-04-05T23:35:41"
"2023-04-05T23:35:41"
NONE
null
### Describe the bug I have a large dataset, about 500GB. The format of the dataset is parquet. I then load the dataset and try to get the first item ```python def get_one_item(): dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True) dataset = dataset.filter(lambda example: example['text'].startswith('Ar')) print(next(iter(dataset))) ``` However, this function never finishes. I waited ~10 mins; the function was still running, so I killed the process. I'm now using `line_profiler` to profile how long it takes to return one item. I'll be patient and wait for as long as it needs. I suspect the filter operation is the reason it takes so long. Can I get some possible reasons behind this? ### Steps to reproduce the bug Unfortunately, without my data files, there is no way to reproduce this bug. ### Expected behavior With `IterableDataset`, I expect the first item to be returned instantly. ### Environment info - datasets version: 2.11.0 - python: 3.7.12
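One way to check whether the `filter` predicate (rather than the Parquet streaming itself) dominates the wait is to time the first item with and without the filter; a rough sketch, assuming the same data files as above:

```python
import time
from datasets import load_dataset

dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)

t0 = time.time()
next(iter(dataset))  # first raw example, no predicate
print(f"first raw example: {time.time() - t0:.1f}s")

filtered = dataset.filter(lambda example: example["text"].startswith("Ar"))
t0 = time.time()
next(iter(filtered))  # first example that satisfies the predicate
print(f"first matching example: {time.time() - t0:.1f}s")
```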
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5705/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5704/comments
https://api.github.com/repos/huggingface/datasets/issues/5704/events
https://github.com/huggingface/datasets/pull/5704
1,653,471,356
PR_kwDODunzps5NkEvJ
5,704
5537 speedup load
{ "login": "semajyllek", "id": 35013374, "node_id": "MDQ6VXNlcjM1MDEzMzc0", "avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/semajyllek", "html_url": "https://github.com/semajyllek", "followers_url": "https://api.github.com/users/semajyllek/followers", "following_url": "https://api.github.com/users/semajyllek/following{/other_user}", "gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}", "starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions", "organizations_url": "https://api.github.com/users/semajyllek/orgs", "repos_url": "https://api.github.com/users/semajyllek/repos", "events_url": "https://api.github.com/users/semajyllek/events{/privacy}", "received_events_url": "https://api.github.com/users/semajyllek/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Awesome ! cc @mariosasko :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5704). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this!\r\n\r\nYour solution only works if the `root` is `\"\"`, e.g., this would yield an incorrect result:\r\n```python\r\ndset = load_dataset(\"user/hf-dataset-repo\", data_dir=\"path/to/data_dir\")\r\n```\r\n\r\nAlso, the `HfFileSystem` implementation in `datasets` will be replaced with the more powerful [one](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py) from `huggingface_hub` soon (I plan to open a PR that makes `find` much faster in the coming days). \r\n\r\nSo I don't think we want to merge this PR in the current state, but thanks again for the effort.\r\n\r\n (I'll comment on the original issue to propose a cleaner solution)", "Ooof. Sorry, I should have checked that more thoroughly then! I would say we could just add that check and only use my approach if the root is \"\", which would still be faster in many cases, but it sounds like you have a better solution on the way. Thanks for the feedback Mario." ]
"2023-04-04T08:58:14"
"2023-04-07T16:10:55"
null
NONE
null
I reimplemented fsspec.spec.glob() in `hffilesystem.py` as `_glob`, used it only in `_resolve_single_pattern_in_dataset_repository`, and saw a 20% speedup in the time to load the config, on average. That's not much when this step usually takes only 2-3 seconds for most datasets, but in this particular case, `bigcode/the-stack-dedup`, the loading time to get the config (not to download the entire 6 TB dataset, of course) went from ~170 secs to ~20 secs. What makes this work is this code in `_glob`: ``` if self.dir_cache is not None: allpaths = self.dir_cache else: allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) ``` I also had to import `glob.has_magic()` for `_glob()` (confusing, I know). I hope there is no issue with copying most of the code from `fsspec.spec.glob`, as it is under a BSD 3-Clause License, and I left a comment about this in the docstring of `_glob()` that we may want to delete. As mentioned, I evaluated the speedup across a random selection of about 1000 datasets (not all 27k+), and verified that old_config.eq(new_method_config) holds against the built-in method, but deleted this test and the related code changes in the subsequent commit. It's in the commit history if anyone wants to see it. (Note this does not include the outlier of `bigcode/the-stack-dedup`.) | | old_time | new_time | diff | pct_diff | | -- | -- | -- | -- | -- | | mean | 3.340 | 2.642 | 0.698 | 18.404 | | min | 2.024 | 1.976 | -0.840 | -37.634 | | max | 66.582 | 41.517 | 30.927 | 85.538 |
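The core idea of the change, matching the glob pattern against an already-populated directory cache instead of re-listing the repository for every pattern, can be sketched independently of `fsspec`; the names below are illustrative, not the actual implementation:

```python
import fnmatch

def glob_from_cache(pattern, dir_cache):
    """Match a glob-style pattern against an in-memory listing instead of hitting the Hub again."""
    return [path for path in dir_cache if fnmatch.fnmatch(path, pattern)]

# dir_cache stands in for the listing obtained from a single find(...) call
dir_cache = ["data/train-00000.parquet", "data/train-00001.parquet", "README.md"]
print(glob_from_cache("data/*.parquet", dir_cache))
```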
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5704/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5704", "html_url": "https://github.com/huggingface/datasets/pull/5704", "diff_url": "https://github.com/huggingface/datasets/pull/5704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5704.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5703/comments
https://api.github.com/repos/huggingface/datasets/issues/5703/events
https://github.com/huggingface/datasets/pull/5703
1,653,158,955
PR_kwDODunzps5NjCCV
5,703
[WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only
{ "login": "hvaara", "id": 1535968, "node_id": "MDQ6VXNlcjE1MzU5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hvaara", "html_url": "https://github.com/hvaara", "followers_url": "https://api.github.com/users/hvaara/followers", "following_url": "https://api.github.com/users/hvaara/following{/other_user}", "gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}", "starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hvaara/subscriptions", "organizations_url": "https://api.github.com/users/hvaara/orgs", "repos_url": "https://api.github.com/users/hvaara/repos", "events_url": "https://api.github.com/users/hvaara/events{/privacy}", "received_events_url": "https://api.github.com/users/hvaara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "`multiprocess` uses `dill` instead of `pickle` for pickling shared objects and, as such, can pickle more types than `multiprocessing`. And I don't think this is something we want to change :).", "That makes sense to me, and I don't think you should merge this change. I was only curious about the performance impact. I saw the benchmarks that was produced in other PRs, and wanted to get a better understanding of it. I created this PR to see if it got automatically added here.\r\n\r\nIs there a way I can generate those benchmarks myself?", "You can find some speed comparisons between dill and pickle on SO if you google \"dill vs pickle speed\".\r\n\r\nAnd for the benchmarks, you can generate them locally with DVC running this code from the repo root: https://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/.github/workflows/benchmarks.yaml#L23-L47.", "Thanks for the help @mariosasko!" ]
"2023-04-04T04:37:49"
"2023-04-20T03:17:37"
"2023-04-20T03:17:32"
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5703/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5703", "html_url": "https://github.com/huggingface/datasets/pull/5703", "diff_url": "https://github.com/huggingface/datasets/pull/5703.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5703.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5702/comments
https://api.github.com/repos/huggingface/datasets/issues/5702/events
https://github.com/huggingface/datasets/issues/5702
1,653,104,720
I_kwDODunzps5iiGBQ
5,702
Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None?
{ "login": "gitforziio", "id": 10508116, "node_id": "MDQ6VXNlcjEwNTA4MTE2", "avatar_url": "https://avatars.githubusercontent.com/u/10508116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gitforziio", "html_url": "https://github.com/gitforziio", "followers_url": "https://api.github.com/users/gitforziio/followers", "following_url": "https://api.github.com/users/gitforziio/following{/other_user}", "gists_url": "https://api.github.com/users/gitforziio/gists{/gist_id}", "starred_url": "https://api.github.com/users/gitforziio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gitforziio/subscriptions", "organizations_url": "https://api.github.com/users/gitforziio/orgs", "repos_url": "https://api.github.com/users/gitforziio/repos", "events_url": "https://api.github.com/users/gitforziio/events{/privacy}", "received_events_url": "https://api.github.com/users/gitforziio/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! `datasets` uses Apache Arrow as backend to store the data, and it requires each column to have a fixed type. Therefore a column can't have a mix of dicts/lists/strings.\r\n\r\nThough it's possible to have one (nullable) field for each type:\r\n```python\r\nfeatures = Features({\r\n \"text_alone\": Value(\"string\"),\r\n \"text_with_idxes\": {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": Value(\"int64\")\r\n }\r\n})\r\n```\r\n\r\nbut you'd have to reformat your data fiels or define a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) to apply the appropriate parsing.\r\n\r\nAlternatively we could explore supporting the Arrow [Union](https://arrow.apache.org/docs/python/generated/pyarrow.UnionType.html) type which could solve this issue, but I don't know if it's well supported in python and with the rest of the ecosystem like Parquet", "@lhoestq Thank you! I further wonder if it's possible to use list subscripts as keys of a feature? Like\r\n```python\r\nfeatures = Features({\r\n 0: Value(\"string\"),\r\n 1: {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": [Value(\"int64\")]\r\n },\r\n 2: Value(\"string\"),\r\n # ...\r\n})\r\n```", "Column names need to be strings, so you could use \"1\", \"2\", etc. or give appropriate column names", "@lhoestq Got it. Thank you!" ]
"2023-04-04T03:20:43"
"2023-04-05T14:15:18"
"2023-04-05T14:15:17"
NONE
null
### Feature request Hello! Apologies if my question sounds naive: I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None? Specifically, I’d like to define a feature for a list that contains 18 elements, each of which has been pre-defined as either a `dict or None` or `str or None` - as demonstrated in the slightly misaligned data provided below: ```json [ [ {"text":"老妇人","idxes":[0,1,2]},null,{"text":"跪","idxes":[3]},null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,null,null,null,null,null,null,null,null,null], [ {"text":"那些水","idxes":[13,14,15]},null,{"text":"舀","idxes":[11]},null,null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,{"text":"出","idxes":[12]},null,null,null,null,null,null,null], [ {"text":"水","idxes":[38]}, null, {"text":"舀","idxes":[40]}, "假", // note this is just a standalone string null,null,null,{"text":"坑里","idxes":[35,36]},null,null,null,null,null,null,null,null,null,null]] ``` ### Motivation I'm currently working with a dataset of the following structure and I couldn't find a solution in the [documentation](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Features). ```json {"qid":"3-train-1058","context":"桑桑害怕了。从玉米地里走到田埂上,他遥望着他家那幢草房子里的灯光,知道母亲没有让他回家的意思,很伤感,有点想哭。但没哭,转身朝阿恕家走去。","corefs":[[{"text":"桑桑","idxes":[0,1]},{"text":"他","idxes":[17]}]],"non_corefs":[],"outputs":[[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[11]},null,null,null,null,null,{"text":"从玉米地里","idxes":[6,7,8,9,10]},{"text":"到田埂上","idxes":[12,13,14,15]},null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[66]},null,null,null,null,null,null,null,{"text":"转身朝阿恕家去","idxes":[60,61,62,63,64,65,67]},null,null,null,null,null,null,null],[{"text":"灯光","idxes":[30,31]},null,null,null,null,null,null,{"text":"草房子里","idxes":[25,26,27,28]},null,null,null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},{"text":"他家那幢草房子","idxes":[21,22,23,24,25,26,27]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"远"],[{"text":"他","idxes":[17]},{"text":"阿恕家","idxes":[63,64,65]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"变近"]]} ``` ### Your contribution I'm going to provide the dataset at https://huggingface.co/datasets/2030NLP/SpaCE2022 .
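Following the nullable-fields suggestion from the discussion above, one possible preprocessing step is to map every element (dict, bare string, or None) onto a single struct with optional sub-fields; this is a sketch with illustrative field names, not an official recipe:

```python
from datasets import Dataset, Features, Value

def normalize(element):
    """Map dict / str / None elements onto one fixed, nullable struct."""
    if element is None:
        return {"text": None, "idxes": [], "text_alone": None}
    if isinstance(element, str):
        return {"text": None, "idxes": [], "text_alone": element}
    return {"text": element["text"], "idxes": element["idxes"], "text_alone": None}

features = Features(
    {
        "outputs": [
            {
                "text": Value("string"),
                "idxes": [Value("int64")],
                "text_alone": Value("string"),
            }
        ]
    }
)

raw = [{"text": "水", "idxes": [38]}, None, "假"]
ds = Dataset.from_list([{"outputs": [normalize(e) for e in raw]}], features=features)
print(ds[0])
```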
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5702/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5701/comments
https://api.github.com/repos/huggingface/datasets/issues/5701/events
https://github.com/huggingface/datasets/pull/5701
1,652,931,399
PR_kwDODunzps5NiSCy
5,701
Add Dataset.from_spark
{ "login": "maddiedawson", "id": 106995444, "node_id": "U_kgDOBmCe9A", "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maddiedawson", "html_url": "https://github.com/maddiedawson", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "repos_url": "https://api.github.com/users/maddiedawson/repos", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mariosasko Would you or another HF datasets maintainer be able to review this, please?", "Amazing ! Great job @maddiedawson \r\n\r\nDo you know if it's possible to also support writing to Parquet using the HF ParquetWriter if `file_format=\"parquet\"` ?\r\n\r\nParquet is often used when people want to stream the data to train models - which is suitable for big datasets. On the other hand Arrow is generally used for local memory mapping with random access.\r\n\r\n> Please note there was a previous PR adding this functionality\r\n\r\nAm I right to say that it uses the spark workers to prepare the Arrow files ? If so this should make the data preparation fast and won't fill up the executor's memory as in the previously proposed PR", "Thanks for taking a look! Unlike the previous PR's approach, this implementation takes advantage of Spark mapping to distribute file writing over multiple tasks. (Also it doesn't load the entire dataset into memory :) )\r\n\r\nSupporting Parquet here sgtm; I'll modify the PR.\r\n\r\nI also updated the PR description with a common Spark-HF use case that we want to improve.", "Hey @albertvillanova @lhoestq , would one of you be able to re-review please? Thank you!", "@lhoestq this is ready for another pass! Thanks so much 🙏 ", "Friendly ping @lhoestq , also cc @polinaeterna who may be able to help take a look?", "Merging `main` into this branch should fix the CI", "Just rebased @lhoestq ", "Thanks @lhoestq ! Is there a way for me to trigger the github workflow myself to triage the test failure? I'm not able to repro the test failures locally.", "There were two test issues in the workflow that I wasn't able to reproduce locally:\r\n\r\n- Python 3.7: createDataFrame fails due to a pickling error. I modified the tests to instead write and read from json files\r\n- Python 3.10: A worker crashes for unknown reasons. I modified the spark setup to explicitly specify local mode in case it was trying to do something else; let's see if that fixes the issue", "Also one more question @lhoestq when is the next datasets release? We're hoping this can make it in", "I just re-ran the CI.\r\nI think we can do a release right after this PR is merged ;)", "Thanks all! @lhoestq could we re-run CI again please? I think we have to disable this feature on python 3.7 due to the pickling error. The other failure was due to https://issues.apache.org/jira/browse/SPARK-30952 so I rewrote the df processing", "Thanks @lhoestq , this is ready for another CI run. I pinned the pyspark version to see if that fixes the pickling issue", "The remaining CI issues have been addressed! They were\r\n\r\n- dill=0.3.1.1 is incompatible with cloudpickle, used by Spark. The min-dependency tests use this dill version, and those were failing. I added a skip-test annotation to skip Spark tests when using this dill version. This shouldn't be a production issue since if users are using that version of dill, they won't really be able to do anything with Spark anyway.\r\n- One of the Spark APIs used in this feature (mapInArrow) is incompatible with Windows. I filed a Spark ticket for the team to investigate. For the tests, I added another annotation to skip Spark tests on Windows. 
In the next PR (adding streaming mode), we should be able to support Windows since that won't use mapInArrow.\r\n\r\nI ran the CI on my forked branch: https://github.com/maddiedawson/datasets/pull/2 Everything passes except one instance of tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore; it looks like a flake.\r\n\r\n@lhoestq granted that the CI passes here, is this ok to merge and release? We'd like to put out a blog post tomorrow to broadcast this to Spark users!", "Thanks @lhoestq ! Could you help take a look at the error please? Seems unrelated...\r\n\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_multiprocessing_on_disk - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\\\Users\\\\RUNNER~1\\\\AppData\\\\Local\\\\Temp\\\\tmptfnrdj4x\\\\cache-5c5687cf5629c97a_00000_of_00002.arrow'\r\n===== 1 failed, 2152 passed, 23 skipped, 20 warnings in 461.68s (0:07:41) =====", "The blog is live btw! https://www.databricks.com/blog/contributing-spark-loader-for-hugging-face-datasets Hopefully there can be a release today?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012686 / 0.011353 (0.001333) | 0.006051 / 0.011008 (-0.004957) | 0.123057 / 0.038508 (0.084549) | 0.033238 / 0.023109 (0.010128) | 0.388207 / 0.275898 (0.112309) | 0.393972 / 0.323480 (0.070492) | 0.006645 / 0.007986 (-0.001340) | 0.006715 / 0.004328 (0.002386) | 0.098348 / 0.004250 (0.094097) | 0.041410 / 0.037052 (0.004358) | 0.380123 / 0.258489 (0.121634) | 0.427982 / 0.293841 (0.134141) | 0.052194 / 0.128546 (-0.076352) | 0.018775 / 0.075646 (-0.056871) | 0.399063 / 0.419271 (-0.020209) | 0.061019 / 0.043533 (0.017487) | 0.370943 / 0.255139 (0.115804) | 0.398326 / 0.283200 (0.115127) | 0.136893 / 0.141683 (-0.004790) | 1.777431 / 1.452155 (0.325276) | 1.844354 / 1.492716 (0.351638) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267296 / 0.018006 (0.249289) | 0.565133 / 0.000490 (0.564643) | 0.005811 / 0.000200 (0.005611) | 0.000122 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027009 / 0.037411 (-0.010402) | 0.125907 / 0.014526 (0.111381) | 0.122111 / 0.176557 (-0.054445) | 0.189023 / 0.737135 (-0.548112) | 0.140510 / 0.296338 (-0.155829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.589269 / 0.215209 (0.374060) | 6.038038 / 2.077655 (3.960384) | 2.394681 / 1.504120 (0.890561) | 2.099268 / 1.541195 (0.558073) | 2.105146 / 1.468490 (0.636656) | 1.216304 / 4.584777 (-3.368473) | 5.823110 / 3.745712 (2.077397) | 4.999323 / 5.269862 (-0.270539) | 2.781554 / 4.565676 (-1.784122) | 0.148370 / 0.424275 (-0.275905) | 0.015163 / 0.007607 (0.007556) | 0.775153 / 0.226044 (0.549109) | 7.425314 / 2.268929 (5.156385) | 3.320254 / 55.444624 (-52.124370) | 2.718595 / 6.876477 (-4.157881) | 2.696215 / 2.142072 (0.554142) | 1.452249 / 4.805227 (-3.352978) | 0.281355 / 6.500664 (-6.219309) | 0.088146 / 0.075469 (0.012677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.495718 / 1.841788 (-0.346070) | 17.498714 / 8.074308 (9.424405) | 20.109705 / 10.191392 (9.918313) | 0.233053 / 0.680424 (-0.447371) | 0.028336 / 0.534201 (-0.505865) | 0.538146 / 0.579283 (-0.041137) | 0.642106 / 0.434364 (0.207742) | 0.597214 / 0.540337 (0.056876) | 0.732219 / 1.386936 (-0.654717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008153 / 0.011353 (-0.003200) | 0.005605 / 0.011008 (-0.005403) 
| 0.096159 / 0.038508 (0.057651) | 0.034102 / 0.023109 (0.010992) | 0.428091 / 0.275898 (0.152193) | 0.476535 / 0.323480 (0.153056) | 0.006278 / 0.007986 (-0.001708) | 0.006752 / 0.004328 (0.002424) | 0.100553 / 0.004250 (0.096302) | 0.045546 / 0.037052 (0.008494) | 0.463236 / 0.258489 (0.204747) | 0.502512 / 0.293841 (0.208671) | 0.051014 / 0.128546 (-0.077533) | 0.018499 / 0.075646 (-0.057148) | 0.127587 / 0.419271 (-0.291685) | 0.059254 / 0.043533 (0.015722) | 0.432248 / 0.255139 (0.177109) | 0.462002 / 0.283200 (0.178802) | 0.124918 / 0.141683 (-0.016765) | 1.689740 / 1.452155 (0.237585) | 1.871546 / 1.492716 (0.378830) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274844 / 0.018006 (0.256838) | 0.570522 / 0.000490 (0.570032) | 0.004008 / 0.000200 (0.003808) | 0.000146 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025323 / 0.037411 (-0.012088) | 0.116323 / 0.014526 (0.101797) | 0.129434 / 0.176557 (-0.047122) | 0.187069 / 0.737135 (-0.550067) | 0.134459 / 0.296338 (-0.161880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633551 / 0.215209 (0.418341) | 6.290078 / 2.077655 (4.212423) | 2.692071 / 1.504120 (1.187951) | 2.354344 / 1.541195 (0.813149) | 2.409260 / 1.468490 (0.940770) | 1.270515 / 4.584777 (-3.314261) | 5.552982 / 3.745712 (1.807270) | 3.041417 / 5.269862 (-2.228444) | 1.920634 / 4.565676 (-2.645043) | 0.142500 / 0.424275 (-0.281775) | 0.014378 / 0.007607 (0.006770) | 0.786444 / 0.226044 (0.560399) | 7.711558 / 2.268929 (5.442630) | 3.439688 / 55.444624 (-52.004936) | 2.742314 / 6.876477 (-4.134163) | 2.800531 / 2.142072 (0.658458) | 1.405843 / 4.805227 (-3.399385) | 0.245322 / 6.500664 (-6.255342) | 0.076662 / 0.075469 (0.001193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.592961 / 1.841788 (-0.248827) | 18.165647 / 8.074308 (10.091339) | 20.011433 / 10.191392 (9.820041) | 0.240558 / 0.680424 (-0.439866) | 0.026045 / 0.534201 (-0.508156) | 0.529610 / 0.579283 (-0.049674) | 0.652494 / 0.434364 (0.218130) | 0.612284 / 0.540337 (0.071947) | 0.733180 / 1.386936 (-0.653756) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea251c726c73bd076a1bef7e39e2ac4e97c8d166 \"CML watermark\")\n", "python 3.9.2\r\nGot an error _pickle.PicklingError use Dataset.from_spark.\r\n\r\nDid the dataset import load 
data from spark dataframe using multi-node Spark cluster\r\ndf = spark.read.parquet(args.input_data).repartition(50)\r\nds = Dataset.from_spark(df, keep_in_memory=True,\r\n cache_dir=\"/pnc-data/data/nuplan/t5_spark/cache_data\")\r\nds.save_to_disk(args.output_data)\r\n\r\nError : \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.\r\n23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n", "Hi @yanzia12138 ! Could you open a new issue please and share the full stack trace ? This will help to know what happened exactly" ]
"2023-04-03T23:51:29"
"2023-06-16T16:39:32"
"2023-04-26T15:43:39"
CONTRIBUTOR
null
Adds a static method Dataset.from_spark to create datasets from Spark DataFrames. This approach spares users the need to materialize their dataframe: a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train on the dataset. Related issue: https://github.com/huggingface/datasets/issues/5678
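A minimal usage sketch of the new method, assuming an existing local `SparkSession`; only `Dataset.from_spark(df)` itself comes from this PR, the rest is setup:

```python
from datasets import Dataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("positive", 1), ("negative", 0)], ["text", "label"])

# Spark tasks write the Arrow data, so the driver does not need to materialize the DataFrame.
ds = Dataset.from_spark(df)
print(ds[0])  # {'text': 'positive', 'label': 1}
```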
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5701/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5701/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5701", "html_url": "https://github.com/huggingface/datasets/pull/5701", "diff_url": "https://github.com/huggingface/datasets/pull/5701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5701.patch", "merged_at": "2023-04-26T15:43:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/5700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5700/comments
https://api.github.com/repos/huggingface/datasets/issues/5700/events
https://github.com/huggingface/datasets/pull/5700
1,652,527,530
PR_kwDODunzps5Ng6g_
5,700
fix: fix wrong modification of the 'cache_file_name' -related paramet…
{ "login": "FrancoisNoyez", "id": 47528215, "node_id": "MDQ6VXNlcjQ3NTI4MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancoisNoyez", "html_url": "https://github.com/FrancoisNoyez", "followers_url": "https://api.github.com/users/FrancoisNoyez/followers", "following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}", "gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions", "organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs", "repos_url": "https://api.github.com/users/FrancoisNoyez/repos", "events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`", "@lhoestq \r\nRegarding what you suggest:\r\nThe thing is, if cached files already exist and do correspond to the split that we are currently trying to perform, then it would be a shame not to use them, would it not? So I don't think that we should necessarily bypass this step in the method (corresponding to the reading of already existing data), if 'keep_in_memory' = True. For me, 'keep_in_memory' = True is supposed to mean \"don't cache the output of this method\", but it should say nothing regarding what to do with potentially already existing cached data, should it?\r\nBesides, even if we do what you suggest, and do only that (so, not the modifs that I suggested), then, assuming that 'keep_in_memory' = False and that there exist cached files, if the following check on the existence of cached files with specific name fails, we will still have ended up modifying an input value which will be then used in the remaining of the method, potentially altering the behavior that the user intended the method's call to have. Basically, the issue with what you suggest is that we can't guaranty that we won't continue with the remaining of the method even if this condition is met. Because of that, in my opinion, the best way to not have to worry about potential, unwanted side effects in the rest of the code is to not modify those variables in place, and so, here, to use other variables.\r\nSo, I'm sorry, but for those two reasons, I don't think that what you are suggesting addresses the problems which are described in the opened issue.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5700). All of your documentation changes will be reflected on that endpoint.", "Makes sense ! Therefore removing the ValueError messages sounds good to me, thanks for detailing.\r\n\r\nThen I think it's fine to keep using the same variables for the cache file names is enough instead of defining new ones - it doesn't alter the behavior of the function. Otherwise it would feel a bit confusing to have similar variables with slightly modified names just for that", "Ok for the removing the ValueError exceptions, thanks.\r\n\r\nThat said, it seems to me like we should still find a way not to modify the values input by the user, insofar as they can be used elsewhere down the line in the program. Sure, here, by removing the raising of those ValueError exceptions, we have fixed one use cases were allowing this modification actually caused an issue, but maybe there are other use cases where this would also caused an issue? Also, maybe in the future we will add other functionalities which will depend on the values of those input parameters, with then new risks of such an issue occurring?\r\nThat's why, in order not to have to worry about that, and in order to make the code a bit more future -proof, I suggest that make sure those input values are not modified.\r\n\r\nOne way that I did this is to create different but similar looking variable names. If you find this confusing, we can always add a comment.\r\nAnother way would be to not store the result of the conditional definition of the values (the '\\_cache_file_name = (... if condition else ...)' in my proposition of code), and to use it every time we need. 
But since we use those new variables at least twice, that creates code redundancy, which is not great either.\r\nFinally, a third way that I can imagine would be to put all this logic into its own method, which would then encapsulate it, and protect the remaining of the 'train_test_split' code from all unintended side effect that this logic can currently cause. This one is probably best. Also, maybe it could be used to remove some code redundancy elsewhere in the definition of the Dataset class? I have not checked if such a code redundancy exists.", "We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nNote that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though, but it should be easy to add in `_select_with_indices_mapping`:\r\n- add keep_in_memory in `_new_dataset_with_indices` that uses InMemoryTable.from_file\r\n- inside `_select_with_indices_mapping` return the dataset from `_new_dataset_with_indices` if:\r\n - `keep_in_memory=True`\r\n - and `indices_cache_file_name` is not None and exists \r\n - and `is_caching_enabled()`\r\n\r\nBecause if we let it this way it would recreate the cache file unfortunately", "> We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nI think the fact that it's a style of the library is not really an argument in itself; however, after thinking through it several times, I think I know see why your solution is acceptable: as soon as the user specifies that 'keep_in_memory=True', they should not care anymore about the value of the '\\_indices_cache_file_name' variables, since from their point of view those are now irrelevant. So it's \"fine\" if we allow ourselves to modify the value of those variables, if it helps the internal code being more concise.\r\nStill, I find that it's a bit unintuitive, and a risk as far as future evolution of the method / of the code is concerned; someone tasked with doing that would need to have the knowledge of a lot of, if not all, the other methods of the class, in order to understand the potentially far-reaching impact of some modifications made to this portion of the code. But I guess that's a choice which is the library's owners to make. Also, if we use your proposed solution, as I explained, we can't get the benefit of potentially reusing possibly already existing cached data.\r\nOn that note...\r\n\r\n> Note that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though\r\n\r\nI'm not sure what you mean here:\r\nWithin the current code trying to load up the potentially already existing split data, there is no trace of the 'keep_in_memory' variable. So why do you say that 'the case where it would reload the cache even if keep_in_memory=True is not implemented' (I assume that you mean 'currently implemented')? Surely, currently, this bit of code works regardless of the value of the 'keep_in_memory' variable', does it not?" ]
"2023-04-03T18:05:26"
"2023-04-06T17:17:27"
null
NONE
null
…ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5700/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5700", "html_url": "https://github.com/huggingface/datasets/pull/5700", "diff_url": "https://github.com/huggingface/datasets/pull/5700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5700.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5699/comments
https://api.github.com/repos/huggingface/datasets/issues/5699/events
https://github.com/huggingface/datasets/issues/5699
1,652,437,419
I_kwDODunzps5ifjGr
5,699
Issue when wanting to split in memory a cached dataset
{ "login": "FrancoisNoyez", "id": 47528215, "node_id": "MDQ6VXNlcjQ3NTI4MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancoisNoyez", "html_url": "https://github.com/FrancoisNoyez", "followers_url": "https://api.github.com/users/FrancoisNoyez/followers", "following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}", "gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions", "organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs", "repos_url": "https://api.github.com/users/FrancoisNoyez/repos", "events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)" ]
"2023-04-03T17:00:07"
"2023-04-04T16:52:42"
null
NONE
null
### Describe the bug **In the 'train_test_split' method of the Dataset class** (defined datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, to see if we can just provide back / work from cached data. But if we can't provide cached data, we move on with the call to the method, except those two values are not None anymore, which will conflict with the use of the 'keep_in_memory' parameter down the line. Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both.".** Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached. Which is inconvenient when one is just performing experiments, with no intention of caching the result. Aside from this being inconvenient, **the code which lead up to that situation seems simply wrong** to me: the input variable should not be modified so as to change the user's intention just to perform a test, if that test can fail and respecting the user's intention is necessary to proceed in that case. To fix this, I suggest to use other variables / other variable names, in order to host the value(s) needed to perform the test, so as not to change the originally input values needed by the rest of the method's code. Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: should the use of 'keep_in_memory' not prevail anyway, specifying that the user does not want to perform caching, and so making irrelevant the value of 'indices_cache_file_name'? This is indeed what happens when we look further in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, then the value of indices_cache_file_name does not matter, the data will be written to a stream buffer anyway. Hence I suggest to remove the raising of exception in those circumstances. Notably, to remove the raising of it in the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods. ### Steps to reproduce the bug ```python import datasets def generate_examples(): for i in range(10): yield {"id": i} dataset_ = datasets.Dataset.from_generator( generate_examples, keep_in_memory=False, ) dataset_.train_test_split( test_size=3, shuffle=False, keep_in_memory=True, train_indices_cache_file_name=None, test_indices_cache_file_name=None, ) ``` ### Expected behavior The result of the above code should be a DatasetDict instance. 
Instead, we get the following exception stack: ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset_.train_test_split( 2 test_size=3, 3 shuffle=False, 4 keep_in_memory=True, 5 train_indices_cache_file_name=None, 6 test_indices_cache_file_name=None, 7 ) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint) 4425 test_indices = permutation[:n_test] 4426 train_indices = permutation[n_test : (n_test + n_train)] -> 4428 train_split = self.select( 4429 indices=train_indices, 4430 keep_in_memory=keep_in_memory, 4431 indices_cache_file_name=train_indices_cache_file_name, 4432 writer_batch_size=writer_batch_size, 4433 new_fingerprint=train_new_fingerprint, 4434 ) 4435 test_split = self.select( 4436 indices=test_indices, 4437 keep_in_memory=keep_in_memory, (...) 4440 new_fingerprint=test_new_fingerprint, 4441 ) 4443 return DatasetDict({"train": train_split, "test": test_split}) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3645 """Create a new dataset with rows selected following the list/array of indices. 
3646 3647 Args: (...) 3676 ``` 3677 """ 3678 if keep_in_memory and indices_cache_file_name is not None: -> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.") 3681 if len(self.list_indexes()) > 0: 3682 raise DatasetTransformationNotAllowedError( 3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it." 3684 ) ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both. ``` ### Environment info - `datasets` version: 2.11.1.dev0 - Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0 *** *** EDIT: Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700)
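A minimal sketch of the non-mutating approach suggested in the issue (illustrative only; the function and variable names are hypothetical, and this is not the actual `train_test_split` code):

```python
# Illustrative only: resolve the names used for the cache lookup into local
# variables so the user-provided arguments are never modified in place.
def resolve_cache_lookup_names(
    train_indices_cache_file_name,
    test_indices_cache_file_name,
    default_train_cache_file_name,
    default_test_cache_file_name,
):
    train_lookup = train_indices_cache_file_name or default_train_cache_file_name
    test_lookup = test_indices_cache_file_name or default_test_cache_file_name
    # Only train_lookup / test_lookup are used to check for already cached splits;
    # the original arguments are passed on unchanged to .select().
    return train_lookup, test_lookup
```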
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5699/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5698/comments
https://api.github.com/repos/huggingface/datasets/issues/5698/events
https://github.com/huggingface/datasets/issues/5698
1,652,183,611
I_kwDODunzps5ielI7
5,698
Add Qdrant as another search index
{ "login": "kacperlukawski", "id": 2649301, "node_id": "MDQ6VXNlcjI2NDkzMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kacperlukawski", "html_url": "https://github.com/kacperlukawski", "followers_url": "https://api.github.com/users/kacperlukawski/followers", "following_url": "https://api.github.com/users/kacperlukawski/following{/other_user}", "gists_url": "https://api.github.com/users/kacperlukawski/gists{/gist_id}", "starred_url": "https://api.github.com/users/kacperlukawski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kacperlukawski/subscriptions", "organizations_url": "https://api.github.com/users/kacperlukawski/orgs", "repos_url": "https://api.github.com/users/kacperlukawski/repos", "events_url": "https://api.github.com/users/kacperlukawski/events{/privacy}", "received_events_url": "https://api.github.com/users/kacperlukawski/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "@mariosasko I'd appreciate your feedback on this. " ]
"2023-04-03T14:25:19"
"2023-04-11T10:28:40"
null
CONTRIBUTOR
null
### Feature request I'd suggest adding Qdrant (https://qdrant.tech) as another available search index, so users can directly build an index from a dataset. Currently, only FAISS and ElasticSearch are supported: https://huggingface.co/docs/datasets/faiss_es ### Motivation ElasticSearch is a keyword-based search system, while FAISS is a vector search library. A vector database, such as Qdrant, is a different tool based on similarity (like FAISS) but is not limited to a single machine. This makes a vector database well-suited for bigger datasets and for collaboration if several people want to access a particular dataset. ### Your contribution I can provide a PR implementing that functionality on my own.
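For comparison, the existing FAISS integration mentioned above is used roughly as follows; a Qdrant-backed index would presumably expose a similar interface. The dataset repo id, column name, and query vector below are placeholders for illustration, and `faiss` must be installed.

```python
# Existing FAISS usage in `datasets`, shown for comparison with the proposed
# Qdrant index. "embeddings" is an assumed column of float vectors.
import numpy as np
from datasets import load_dataset

ds = load_dataset("some-user/dataset-with-embeddings", split="train")  # placeholder repo
ds.add_faiss_index(column="embeddings")  # requires the faiss library
query = np.random.rand(768).astype("float32")  # placeholder query vector
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=10)
```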
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5698/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5697/comments
https://api.github.com/repos/huggingface/datasets/issues/5697/events
https://github.com/huggingface/datasets/pull/5697
1,651,812,614
PR_kwDODunzps5NefxZ
5,697
Raise an error on missing distributed seed
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009644 / 0.011353 (-0.001709) | 0.006407 / 0.011008 (-0.004601) | 0.148353 / 0.038508 (0.109845) | 0.037537 / 0.023109 (0.014428) | 0.379697 / 0.275898 (0.103799) | 0.466260 / 0.323480 (0.142780) | 0.007884 / 0.007986 (-0.000102) | 0.005140 / 0.004328 (0.000812) | 0.111078 / 0.004250 (0.106827) | 0.049429 / 0.037052 (0.012377) | 0.364766 / 0.258489 (0.106277) | 0.453809 / 0.293841 (0.159968) | 0.051918 / 0.128546 (-0.076628) | 0.020081 / 0.075646 (-0.055566) | 0.616041 / 0.419271 (0.196770) | 0.059834 / 0.043533 (0.016301) | 0.373104 / 0.255139 (0.117965) | 0.419304 / 0.283200 (0.136104) | 0.113526 / 0.141683 (-0.028156) | 1.827160 / 1.452155 (0.375006) | 1.912092 / 1.492716 (0.419376) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269584 / 0.018006 (0.251578) | 0.554100 / 0.000490 (0.553610) | 0.006618 / 0.000200 (0.006418) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025280 / 0.037411 (-0.012131) | 0.123116 / 0.014526 (0.108591) | 0.127674 / 0.176557 (-0.048883) | 0.189106 / 0.737135 (-0.548030) | 0.142072 / 0.296338 (-0.154267) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602201 / 0.215209 (0.386992) | 5.959610 / 2.077655 (3.881956) | 2.404856 
/ 1.504120 (0.900736) | 2.175017 / 1.541195 (0.633823) | 2.154360 / 1.468490 (0.685870) | 1.265339 / 4.584777 (-3.319438) | 5.598429 / 3.745712 (1.852716) | 5.130249 / 5.269862 (-0.139612) | 2.764922 / 4.565676 (-1.800754) | 0.143232 / 0.424275 (-0.281043) | 0.014721 / 0.007607 (0.007114) | 0.764734 / 0.226044 (0.538689) | 7.518810 / 2.268929 (5.249882) | 3.344734 / 55.444624 (-52.099890) | 2.601158 / 6.876477 (-4.275319) | 2.726018 / 2.142072 (0.583945) | 1.397918 / 4.805227 (-3.407309) | 0.253277 / 6.500664 (-6.247387) | 0.077772 / 0.075469 (0.002303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499535 / 1.841788 (-0.342253) | 17.782490 / 8.074308 (9.708182) | 21.953064 / 10.191392 (11.761672) | 0.248753 / 0.680424 (-0.431671) | 0.029194 / 0.534201 (-0.505007) | 0.529700 / 0.579283 (-0.049583) | 0.618412 / 0.434364 (0.184048) | 0.605062 / 0.540337 (0.064725) | 0.725661 / 1.386936 (-0.661275) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009489 / 0.011353 (-0.001864) | 0.006423 / 0.011008 (-0.004585) | 0.096789 / 0.038508 (0.058281) | 0.034639 / 0.023109 (0.011530) | 0.403875 / 0.275898 (0.127977) | 0.439368 / 0.323480 (0.115888) | 0.006354 / 0.007986 (-0.001631) | 0.006794 / 0.004328 (0.002466) | 0.095537 / 0.004250 (0.091287) | 0.047749 / 0.037052 (0.010697) | 0.424157 / 0.258489 (0.165668) | 0.487825 / 0.293841 (0.193984) | 0.054675 / 0.128546 (-0.073872) | 0.021349 / 0.075646 (-0.054297) | 0.108917 / 0.419271 (-0.310354) | 0.075891 / 0.043533 (0.032358) | 0.412889 / 0.255139 (0.157750) | 0.464512 / 0.283200 (0.181312) | 0.118832 / 0.141683 (-0.022850) | 1.721215 / 1.452155 (0.269060) | 1.857195 / 1.492716 (0.364478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248308 / 0.018006 (0.230302) | 0.559496 / 0.000490 (0.559006) | 0.007136 / 0.000200 (0.006936) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031772 / 0.037411 (-0.005639) | 0.123565 / 0.014526 (0.109039) | 0.132660 / 0.176557 (-0.043896) | 0.201428 / 0.737135 (-0.535707) | 0.135238 / 0.296338 (-0.161101) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.646978 / 0.215209 (0.431769) | 6.183477 / 2.077655 (4.105822) | 2.782117 / 1.504120 (1.277997) | 2.294093 / 1.541195 (0.752898) | 2.346932 / 1.468490 (0.878442) | 1.239085 / 4.584777 (-3.345692) | 5.696364 / 3.745712 (1.950652) | 4.980102 / 5.269862 (-0.289759) | 2.278116 / 4.565676 (-2.287560) | 0.157339 / 0.424275 (-0.266936) | 0.014936 / 0.007607 (0.007329) | 0.778001 / 0.226044 (0.551957) | 7.708066 / 2.268929 (5.439138) | 3.412235 / 55.444624 (-52.032389) | 2.670670 / 6.876477 (-4.205806) | 2.731802 / 2.142072 (0.589730) | 1.446516 / 4.805227 (-3.358712) | 0.263689 / 6.500664 (-6.236975) | 0.086359 / 0.075469 (0.010890) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573169 / 1.841788 (-0.268619) | 17.690842 / 8.074308 (9.616534) | 20.343336 / 10.191392 (10.151944) | 0.231028 / 0.680424 (-0.449396) | 0.025954 / 0.534201 (-0.508247) | 0.570554 / 0.579283 (-0.008729) | 0.610453 / 0.434364 (0.176089) | 0.675830 / 0.540337 (0.135493) | 0.790650 / 1.386936 (-0.596286) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d094ed07823bfb3271f3a9006daa1f92a64967a5 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007553 / 0.011353 (-0.003800) | 0.005426 / 0.011008 (-0.005582) | 0.096550 / 0.038508 (0.058042) | 0.034393 / 0.023109 (0.011284) | 0.322297 / 0.275898 (0.046399) | 0.340943 / 0.323480 (0.017463) | 0.006350 / 0.007986 (-0.001635) | 0.005700 / 0.004328 (0.001372) | 0.074929 / 0.004250 (0.070678) | 0.054819 / 0.037052 (0.017767) | 0.320151 / 0.258489 (0.061662) | 0.346957 / 0.293841 (0.053116) | 0.036659 / 0.128546 (-0.091887) | 0.012443 / 0.075646 (-0.063204) | 0.332232 / 0.419271 (-0.087040) | 0.051467 / 0.043533 (0.007934) | 0.310952 / 0.255139 (0.055813) | 0.325617 / 0.283200 (0.042417) | 0.104908 / 0.141683 (-0.036775) | 1.446752 / 1.452155 (-0.005403) | 1.558773 / 1.492716 (0.066056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300639 / 0.018006 (0.282633) | 0.499901 / 0.000490 (0.499411) | 0.007340 / 0.000200 (0.007140) | 0.000255 / 0.000054 (0.000201) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027206 / 0.037411 (-0.010206) | 0.105603 / 0.014526 (0.091077) | 0.118669 / 0.176557 (-0.057887) | 0.174050 / 0.737135 (-0.563086) | 0.125099 / 0.296338 (-0.171239) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404285 / 0.215209 (0.189076) | 4.034587 / 2.077655 (1.956933) | 1.812639 / 1.504120 (0.308519) | 1.625745 / 1.541195 (0.084551) | 1.735523 / 1.468490 (0.267033) | 0.709699 / 4.584777 (-3.875078) | 3.802196 / 3.745712 (0.056484) | 3.656984 / 5.269862 (-1.612877) | 1.968470 / 4.565676 (-2.597206) | 0.086612 / 0.424275 (-0.337663) | 0.012368 / 0.007607 (0.004761) | 0.502622 / 0.226044 (0.276577) | 5.017876 / 2.268929 (2.748948) | 2.279794 / 55.444624 (-53.164831) | 1.956938 / 6.876477 (-4.919538) | 2.150430 / 2.142072 (0.008357) | 0.847691 / 4.805227 (-3.957536) | 0.170157 / 6.500664 (-6.330507) | 0.064141 / 0.075469 (-0.011328) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172246 / 1.841788 (-0.669542) | 15.229444 / 8.074308 (7.155136) | 14.715913 / 10.191392 (4.524521) | 0.192501 / 0.680424 (-0.487923) | 0.017972 / 0.534201 (-0.516229) | 0.423834 / 0.579283 (-0.155449) | 0.423019 / 0.434364 (-0.011345) | 0.493298 / 0.540337 
(-0.047039) | 0.589833 / 1.386936 (-0.797103) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007773 / 0.011353 (-0.003580) | 0.005449 / 0.011008 (-0.005560) | 0.075180 / 0.038508 (0.036672) | 0.035221 / 0.023109 (0.012111) | 0.338169 / 0.275898 (0.062271) | 0.374002 / 0.323480 (0.050522) | 0.006391 / 0.007986 (-0.001595) | 0.004406 / 0.004328 (0.000078) | 0.074925 / 0.004250 (0.070675) | 0.056527 / 0.037052 (0.019475) | 0.338071 / 0.258489 (0.079582) | 0.391882 / 0.293841 (0.098041) | 0.037241 / 0.128546 (-0.091305) | 0.012546 / 0.075646 (-0.063100) | 0.087331 / 0.419271 (-0.331940) | 0.049851 / 0.043533 (0.006318) | 0.335264 / 0.255139 (0.080125) | 0.354813 / 0.283200 (0.071614) | 0.110614 / 0.141683 (-0.031069) | 1.432782 / 1.452155 (-0.019372) | 1.548800 / 1.492716 (0.056083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307892 / 0.018006 (0.289886) | 0.518809 / 0.000490 (0.518319) | 0.004058 / 0.000200 (0.003858) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029155 / 0.037411 (-0.008256) | 0.111706 / 0.014526 (0.097180) | 0.122964 / 0.176557 (-0.053592) | 0.170939 / 0.737135 (-0.566196) | 0.128538 / 0.296338 (-0.167801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426529 / 0.215209 (0.211320) | 4.254218 / 2.077655 (2.176563) | 2.011455 / 1.504120 (0.507335) | 1.817397 / 1.541195 (0.276202) | 1.952915 
/ 1.468490 (0.484425) | 0.705052 / 4.584777 (-3.879725) | 3.844458 / 3.745712 (0.098746) | 3.592754 / 5.269862 (-1.677107) | 1.573567 / 4.565676 (-2.992109) | 0.086834 / 0.424275 (-0.337441) | 0.012389 / 0.007607 (0.004782) | 0.541695 / 0.226044 (0.315650) | 5.224492 / 2.268929 (2.955564) | 2.473648 / 55.444624 (-52.970976) | 2.167458 / 6.876477 (-4.709019) | 2.253319 / 2.142072 (0.111246) | 0.836322 / 4.805227 (-3.968905) | 0.168680 / 6.500664 (-6.331984) | 0.065699 / 0.075469 (-0.009770) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281886 / 1.841788 (-0.559902) | 15.451741 / 8.074308 (7.377433) | 14.906870 / 10.191392 (4.715478) | 0.168554 / 0.680424 (-0.511870) | 0.017365 / 0.534201 (-0.516836) | 0.434183 / 0.579283 (-0.145100) | 0.421891 / 0.434364 (-0.012473) | 0.538993 / 0.540337 (-0.001344) | 0.636212 / 1.386936 (-0.750724) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1f428b8172319a6bfe95d7a4356b1d14a8d386d8 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007362 / 0.011353 (-0.003991) | 0.004992 / 0.011008 (-0.006016) | 0.098730 / 0.038508 (0.060222) | 0.033673 / 0.023109 (0.010563) | 0.296334 / 0.275898 (0.020436) | 0.328208 / 0.323480 (0.004728) | 0.005658 / 0.007986 (-0.002327) | 0.004130 / 0.004328 (-0.000199) | 0.074596 / 0.004250 (0.070346) | 0.048230 / 0.037052 (0.011178) | 0.295631 / 0.258489 (0.037142) | 0.347176 / 0.293841 (0.053335) | 0.036359 / 0.128546 (-0.092187) | 0.011889 / 0.075646 (-0.063758) | 0.332889 / 0.419271 (-0.086382) | 0.049708 / 0.043533 (0.006175) | 0.291207 / 0.255139 (0.036068) | 0.311066 / 0.283200 (0.027867) | 0.098418 / 0.141683 (-0.043265) | 1.415450 / 1.452155 (-0.036705) | 1.526928 / 1.492716 (0.034212) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212636 / 0.018006 (0.194630) | 0.432337 / 0.000490 (0.431847) | 0.006839 / 
0.000200 (0.006639) | 0.000205 / 0.000054 (0.000150) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026045 / 0.037411 (-0.011366) | 0.107427 / 0.014526 (0.092901) | 0.114634 / 0.176557 (-0.061922) | 0.169943 / 0.737135 (-0.567192) | 0.123290 / 0.296338 (-0.173048) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409432 / 0.215209 (0.194223) | 4.097910 / 2.077655 (2.020255) | 1.857177 / 1.504120 (0.353057) | 1.672355 / 1.541195 (0.131160) | 1.740130 / 1.468490 (0.271640) | 0.706520 / 4.584777 (-3.878257) | 3.773606 / 3.745712 (0.027893) | 2.101635 / 5.269862 (-3.168226) | 1.326295 / 4.565676 (-3.239382) | 0.085672 / 0.424275 (-0.338604) | 0.012142 / 0.007607 (0.004534) | 0.501168 / 0.226044 (0.275123) | 5.049784 / 2.268929 (2.780855) | 2.322477 / 55.444624 (-53.122148) | 1.990105 / 6.876477 (-4.886372) | 2.115003 / 2.142072 (-0.027070) | 0.837518 / 4.805227 (-3.967709) | 0.168457 / 6.500664 (-6.332207) | 0.064622 / 0.075469 (-0.010847) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188152 / 1.841788 (-0.653635) | 14.991585 / 8.074308 (6.917276) | 14.635187 / 10.191392 (4.443795) | 0.183708 / 0.680424 (-0.496716) | 0.017452 / 0.534201 (-0.516749) | 0.418963 / 0.579283 (-0.160320) | 0.428893 / 0.434364 (-0.005471) | 0.502108 / 0.540337 (-0.038229) | 0.596345 / 1.386936 (-0.790591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007404 / 0.011353 (-0.003949) | 0.005148 / 0.011008 (-0.005860) | 0.074785 / 0.038508 (0.036277) | 0.033815 / 0.023109 (0.010706) | 0.332752 / 0.275898 (0.056854) | 0.368018 / 0.323480 (0.044538) | 0.005642 / 0.007986 (-0.002344) | 0.004041 / 0.004328 (-0.000287) | 0.073455 / 0.004250 (0.069205) | 0.047380 / 0.037052 (0.010328) | 0.337017 / 0.258489 (0.078528) | 0.384185 / 0.293841 (0.090344) | 0.036592 / 0.128546 (-0.091954) | 0.012109 / 0.075646 (-0.063537) | 0.086862 / 0.419271 (-0.332410) | 0.049030 / 0.043533 (0.005497) | 0.336542 / 0.255139 (0.081403) | 0.350295 / 0.283200 (0.067096) | 0.100998 / 0.141683 (-0.040685) | 1.469749 / 1.452155 (0.017594) | 1.588355 / 1.492716 (0.095639) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227552 / 0.018006 (0.209546) | 0.438087 / 0.000490 (0.437598) | 0.000394 / 0.000200 (0.000194) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.111914 / 0.014526 (0.097388) | 0.124583 / 0.176557 (-0.051973) | 0.175471 / 0.737135 (-0.561665) | 0.129535 / 0.296338 (-0.166803) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425625 / 0.215209 (0.210416) | 4.228328 / 2.077655 (2.150673) | 2.021087 / 1.504120 (0.516967) | 1.832550 / 1.541195 (0.291355) | 1.925572 / 1.468490 (0.457082) | 0.690772 / 4.584777 (-3.894005) | 3.724900 / 3.745712 (-0.020813) | 2.080286 / 5.269862 (-3.189576) | 1.316854 / 4.565676 (-3.248822) | 0.085123 / 0.424275 (-0.339152) | 0.012078 / 0.007607 (0.004471) | 0.525802 / 0.226044 (0.299758) | 5.242598 / 2.268929 (2.973670) | 2.491596 / 55.444624 (-52.953028) | 2.125156 / 6.876477 (-4.751320) | 2.185922 / 2.142072 (0.043850) | 0.823116 / 4.805227 (-3.982111) | 0.165188 / 6.500664 (-6.335476) | 0.063970 / 0.075469 (-0.011499) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256948 / 1.841788 (-0.584840) | 14.981990 / 8.074308 (6.907682) | 14.565266 / 10.191392 (4.373874) | 0.175064 / 0.680424 (-0.505360) | 0.017628 / 0.534201 (-0.516573) | 0.429979 / 0.579283 (-0.149304) | 0.422509 / 0.434364 (-0.011855) | 0.546262 / 0.540337 (0.005924) | 0.647103 / 1.386936 (-0.739833) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0803a006db1c395ac715662cc6079651f77c11ea \"CML watermark\")\n" ]
"2023-04-03T10:44:58"
"2023-04-04T15:05:24"
"2023-04-04T14:58:16"
MEMBER
null
close https://github.com/huggingface/datasets/issues/5696
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5697/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5697", "html_url": "https://github.com/huggingface/datasets/pull/5697", "diff_url": "https://github.com/huggingface/datasets/pull/5697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5697.patch", "merged_at": "2023-04-04T14:58:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/5696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5696/comments
https://api.github.com/repos/huggingface/datasets/issues/5696/events
https://github.com/huggingface/datasets/issues/5696
1,651,707,008
I_kwDODunzps5icwyA
5,696
Shuffle a sharded iterable dataset without seed can lead to duplicate data
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-04-03T09:40:03"
"2023-04-04T14:58:18"
"2023-04-04T14:58:18"
MEMBER
null
As reported in https://github.com/huggingface/datasets/issues/5360, if `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes. Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead of exactly one. This can happen only when you have a number of shards that is a factor of the number of nodes. The current workaround is to always set a `seed` in `.shuffle()`.
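A minimal sketch of the workaround, assuming a streaming dataset consumed on several nodes; the dataset name, rank, and world size are placeholders:

```python
# Workaround sketch: pass an explicit seed to .shuffle() so every node shuffles the
# list of shards identically and each shard is assigned to exactly one node.
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("c4", "en", split="train", streaming=True)  # placeholder dataset
ds = ds.shuffle(seed=42, buffer_size=1000)  # explicit seed instead of seed=None
ds_for_this_node = split_dataset_by_node(ds, rank=0, world_size=8)  # placeholder rank/world_size
```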
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5696/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5695/comments
https://api.github.com/repos/huggingface/datasets/issues/5695/events
https://github.com/huggingface/datasets/issues/5695
1,650,974,156
I_kwDODunzps5iZ93M
5,695
Loading big dataset raises pyarrow.lib.ArrowNotImplementedError
{ "login": "amariucaitheodor", "id": 32778667, "node_id": "MDQ6VXNlcjMyNzc4NjY3", "avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amariucaitheodor", "html_url": "https://github.com/amariucaitheodor", "followers_url": "https://api.github.com/users/amariucaitheodor/followers", "following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}", "gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}", "starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions", "organizations_url": "https://api.github.com/users/amariucaitheodor/orgs", "repos_url": "https://api.github.com/users/amariucaitheodor/repos", "events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}", "received_events_url": "https://api.github.com/users/amariucaitheodor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! It looks like an issue with PyArrow: https://issues.apache.org/jira/browse/ARROW-5030\r\n\r\nIt appears it can happen when you have parquet files with row groups larger than 2GB.\r\nI can see that your parquet files are around 10GB. It is usually advised to keep a value around the default value 500MB to avoid these issues.\r\n\r\nNote that currently the row group size is simply defined by the number of rows `datasets.config.DEFAULT_MAX_BATCH_SIZE`, so reducing this value could let you have parquet files bigger than 2GB and with row groups lower than 2GB.\r\n\r\nWould it be possible for you to re-upload the dataset with the default shard size 500MB ?", "Hey, thanks for the reply! I've since switched to working with the locally-saved dataset (which works).\r\nMaybe it makes sense to show a warning for uploads with large shard sizes? Since the functionality completely breaks (due to the PyArrow bug).", "Just tried uploading the same dataset with 500MB shards, I get an errors 4 hours in:\r\n\r\n```\r\nPushing dataset shards to the dataset hub: 25%|██▍ | 358/1453 [4:40:31<14:18:00, 47.01s/it]\r\nTraceback (most recent call last):\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 344, in _inner_upload_lfs_object\r\n return _upload_lfs_object(operation=operation, lfs_batch_action=batch_action, token=token)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 391, in _upload_lfs_object\r\n lfs_upload(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 254, in lfs_upload\r\n _upload_multi_part(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 374, in _upload_multi_part\r\n hf_raise_for_status(part_upload_res)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 301, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 46, in __init__\r\n server_data = response.json()\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/requests/models.py\", line 899, in json\r\n return complexjson.loads(\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"process_wit.py\", line 146, in <module>\r\n dataset.push_to_hub(FINAL_PATH, max_shard_size=\"500MB\", private=False)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1534, in push_to_hub\r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(\r\n File 
\"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 4804, in _push_parquet_shards_to_hub\r\n _retry(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 281, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 2593, in upload_file\r\n commit_info = self.create_commit(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 2411, in create_commit\r\n upload_lfs_files(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 351, in upload_lfs_files\r\n thread_map(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/std.py\", line 1178, in __iter__\r\n for obj in iterable:\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield fs.pop().result()\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 444, in result\r\n return self.__get_result()\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 389, in __get_result\r\n raise self._exception\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 346, in _inner_upload_lfs_object\r\n raise RuntimeError(f\"Error while uploading '{operation.path_in_repo}' to the Hub.\") from exc\r\nRuntimeError: Error while uploading 'data/train-00358-of-01453-22a5cc8b3eb12be3.parquet' to the Hub.\r\n```\r\nLocal saves do work, however.", "Hmmm that was probably an intermitent bug, you can resume the upload by re-running push_to_hub", "Leaving this other error here for the record, which occurs when I load the +700GB dataset from the hub with shard sizes of 500MB:\r\n\r\n```\r\n Traceback (most recent call last): \r\n File \"/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py\", line 1860, in _prepare_split_single\r\n for _, table in generator:\r\n File \"/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 69, in _generate_tables\r\n for batch_idx, record_batch in 
enumerate(\r\n File \"pyarrow/_parquet.pyx\", line 1323, in iter_batches\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\nOSError: Corrupt snappy compressed data.\r\n```\r\nI will probably switch back to the local big dataset or shrink it." ]
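A sketch of the mitigation suggested above: keep row groups well under the ~2GB PyArrow limit by lowering the number of rows written per batch, then re-upload with 500MB shards. The batch size, path, and repo id below are placeholders; the right value depends on how large each row is.

```python
# Sketch of the suggested mitigation: shrink the row groups written by push_to_hub
# so no single row group exceeds the ~2GB PyArrow limit mentioned above.
import datasets
from datasets import load_from_disk

datasets.config.DEFAULT_MAX_BATCH_SIZE = 100  # placeholder value; rows per row group
ds = load_from_disk("path/to/local_dataset")  # placeholder path
ds.push_to_hub("user/dataset-name", max_shard_size="500MB")  # placeholder repo id
```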
"2023-04-02T14:42:44"
"2023-04-11T09:17:54"
"2023-04-10T08:04:04"
NONE
null
### Describe the bug Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`. ### Steps to reproduce the bug Steps to reproduce this behavior: 1. `!pip install datasets` 2. `!huggingface-cli login` 3. This step will throw the error (it might take a while as the dataset has ~170GB): ```python from datasets import load_dataset dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True) ``` Stack trace: ``` (torch-multimodal) bash-4.2$ python test.py Downloading and preparing dataset None/None to /cluster/work/cotterell/tamariucai/HuggingfaceDatasets/theodor1289___parquet/theodor1289--wit-7a3e984414a86a0f/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec... Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 491.68it/s] Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 16.93it/s] Traceback (most recent call last): File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single for _, table in generator: File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1323, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/cluster/work/cotterell/tamariucai/multimodal-mirror/examples/test.py", line 2, in <module> dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True) File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior The dataset is loaded in variable `dataset`. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.4 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5695/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5694/comments
https://api.github.com/repos/huggingface/datasets/issues/5694/events
https://github.com/huggingface/datasets/issues/5694
1,650,467,793
I_kwDODunzps5iYCPR
5,694
Dataset configuration
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Originally we also though about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to modify it in the UI on HF, and for validation on commit", "From internal discussions we agreed to go with the YAML approach, since it's the one that seems more appropriate to be modified by a human on the Hub or locally (while JSON e.g. for models are usually created programmatically).", "Current format:\r\n```yaml\r\nbuilder_config:\r\n data_files:\r\n - split: train\r\n pattern: data/train-*\r\n```" ]
"2023-04-01T13:08:05"
"2023-04-04T14:54:37"
null
MEMBER
null
Following discussions from https://github.com/huggingface/datasets/pull/5331 We could have something like `config.json` to define the configuration of a dataset. ```json { "data_dir": "data" "data_files": { "train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*" } } ``` we could also support a list for several configs with a 'config_name' field. The alternative was to use YAML in the README.md. I think it could also support a `dataset_type` field to specify which dataset builder class to use, and the other parameters would be the builder's parameters. Some parameters exist for all builders like `data_files` and `data_dir`, but some parameters are builder specific like `sep` for csv. This format would be used in `push_to_hub` to be able to push multiple configs. cc @huggingface/datasets EDIT: actually we're going for the YAML approach in README.md
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5694/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5694/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5693/comments
https://api.github.com/repos/huggingface/datasets/issues/5693/events
https://github.com/huggingface/datasets/pull/5693
1,649,934,749
PR_kwDODunzps5NYdPS
5,693
[docs] Split pattern search order
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007841 / 0.011353 (-0.003512) | 0.005640 / 0.011008 (-0.005368) | 0.096465 / 0.038508 (0.057957) | 0.036476 / 0.023109 (0.013367) | 0.306431 / 0.275898 (0.030533) | 0.339545 / 0.323480 (0.016065) | 0.006064 / 0.007986 (-0.001922) | 0.004404 / 0.004328 (0.000076) | 0.073130 / 0.004250 (0.068879) | 0.052765 / 0.037052 (0.015713) | 0.309895 / 0.258489 (0.051406) | 0.354037 / 0.293841 (0.060196) | 0.037127 / 0.128546 (-0.091420) | 0.012387 / 0.075646 (-0.063260) | 0.333503 / 0.419271 (-0.085769) | 0.059799 / 0.043533 (0.016266) | 0.305496 / 0.255139 (0.050358) | 0.324122 / 0.283200 (0.040922) | 0.107007 / 0.141683 (-0.034676) | 1.416743 / 1.452155 (-0.035411) | 1.520772 / 1.492716 (0.028055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261233 / 0.018006 (0.243227) | 0.573806 / 0.000490 (0.573316) | 0.000390 / 0.000200 (0.000190) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027672 / 0.037411 (-0.009740) | 0.112803 / 0.014526 (0.098278) | 0.121085 / 0.176557 (-0.055471) | 0.176056 / 0.737135 (-0.561080) | 0.127171 / 0.296338 (-0.169167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414756 / 0.215209 (0.199547) | 4.148743 / 2.077655 (2.071088) | 
1.883940 / 1.504120 (0.379820) | 1.698771 / 1.541195 (0.157576) | 1.811926 / 1.468490 (0.343436) | 0.708293 / 4.584777 (-3.876484) | 3.780456 / 3.745712 (0.034744) | 2.098556 / 5.269862 (-3.171306) | 1.323512 / 4.565676 (-3.242164) | 0.086253 / 0.424275 (-0.338022) | 0.012587 / 0.007607 (0.004980) | 0.514824 / 0.226044 (0.288779) | 5.157415 / 2.268929 (2.888487) | 2.382519 / 55.444624 (-53.062105) | 2.014539 / 6.876477 (-4.861938) | 2.215239 / 2.142072 (0.073166) | 0.847178 / 4.805227 (-3.958049) | 0.170053 / 6.500664 (-6.330611) | 0.066461 / 0.075469 (-0.009008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199056 / 1.841788 (-0.642732) | 15.244999 / 8.074308 (7.170691) | 14.661593 / 10.191392 (4.470201) | 0.168855 / 0.680424 (-0.511569) | 0.017889 / 0.534201 (-0.516312) | 0.424961 / 0.579283 (-0.154322) | 0.428632 / 0.434364 (-0.005732) | 0.502680 / 0.540337 (-0.037658) | 0.597827 / 1.386936 (-0.789109) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005527 / 0.011008 (-0.005482) | 0.074774 / 0.038508 (0.036266) | 0.035367 / 0.023109 (0.012258) | 0.340594 / 0.275898 (0.064696) | 0.373970 / 0.323480 (0.050490) | 0.006094 / 0.007986 (-0.001892) | 0.004428 / 0.004328 (0.000100) | 0.074120 / 0.004250 (0.069869) | 0.054852 / 0.037052 (0.017800) | 0.357173 / 0.258489 (0.098684) | 0.388877 / 0.293841 (0.095036) | 0.037002 / 0.128546 (-0.091545) | 0.012337 / 0.075646 (-0.063309) | 0.086962 / 0.419271 (-0.332310) | 0.050370 / 0.043533 (0.006837) | 0.342989 / 0.255139 (0.087850) | 0.358065 / 0.283200 (0.074865) | 0.111063 / 0.141683 (-0.030620) | 1.516704 / 1.452155 (0.064549) | 1.634359 / 1.492716 (0.141643) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261493 / 0.018006 (0.243487) | 0.566288 / 0.000490 (0.565799) | 0.000439 / 0.000200 (0.000239) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030426 / 0.037411 (-0.006985) | 0.114606 / 0.014526 (0.100080) | 0.126134 / 0.176557 (-0.050423) | 0.175324 / 0.737135 (-0.561812) | 0.132766 / 0.296338 (-0.163573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426785 / 0.215209 (0.211576) | 4.243555 / 2.077655 (2.165900) | 2.089631 / 1.504120 (0.585511) | 1.994562 / 1.541195 (0.453367) | 2.140284 / 1.468490 (0.671794) | 0.698645 / 4.584777 (-3.886132) | 3.807471 / 3.745712 (0.061759) | 3.275343 / 5.269862 (-1.994519) | 1.796756 / 4.565676 (-2.768921) | 0.085986 / 0.424275 (-0.338289) | 0.012213 / 0.007607 (0.004606) | 0.536815 / 0.226044 (0.310771) | 5.344611 / 2.268929 (3.075683) | 2.498578 / 55.444624 (-52.946047) | 2.153260 / 6.876477 (-4.723217) | 2.251310 / 2.142072 (0.109237) | 0.839104 / 4.805227 (-3.966123) | 0.169639 / 6.500664 (-6.331025) | 0.065880 / 0.075469 (-0.009589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268610 / 1.841788 (-0.573178) | 15.624915 / 8.074308 (7.550606) | 15.163684 / 10.191392 (4.972292) | 0.172992 / 0.680424 (-0.507432) | 0.018154 / 0.534201 (-0.516047) | 0.440485 / 0.579283 (-0.138798) | 0.431949 / 0.434364 (-0.002415) | 0.547935 / 0.540337 (0.007597) | 0.662442 / 1.386936 (-0.724494) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5c8a6ba43c4aaa0ca0665d8dadd87ef33e28e8e4 \"CML watermark\")\n" ]
"2023-03-31T19:51:38"
"2023-04-03T18:43:30"
"2023-04-03T18:29:58"
MEMBER
null
This PR addresses #5681 about the order of split patterns 🤗 Datasets searches for when generating dataset splits.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5693/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5693", "html_url": "https://github.com/huggingface/datasets/pull/5693", "diff_url": "https://github.com/huggingface/datasets/pull/5693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5693.patch", "merged_at": "2023-04-03T18:29:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/5692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5692/comments
https://api.github.com/repos/huggingface/datasets/issues/5692/events
https://github.com/huggingface/datasets/issues/5692
1,649,818,644
I_kwDODunzps5iVjwU
5,692
pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types
{ "login": "cyanic-selkie", "id": 32219669, "node_id": "MDQ6VXNlcjMyMjE5NjY5", "avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyanic-selkie", "html_url": "https://github.com/cyanic-selkie", "followers_url": "https://api.github.com/users/cyanic-selkie/followers", "following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}", "gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions", "organizations_url": "https://api.github.com/users/cyanic-selkie/orgs", "repos_url": "https://api.github.com/users/cyanic-selkie/repos", "events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}", "received_events_url": "https://api.github.com/users/cyanic-selkie/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?", "> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n", "@cyanic-selkie could you explain how you fixed it? I met the same error in loading other datasets, is it due to the version of the library enviroment? ", "@MingsYang I never fixed it. If you're referring to my comment above, I only meant I fixed the link to my code.\r\n\r\nAnyway, I managed to work around the issue by using `streaming` when loading the dataset.", "@cyanic-selkie Emm, I get it. I just tried to use a new version python enviroment, and it show no errors anymore.", "Upgrade pyarrow to the latest version solves this problem in my case." ]
"2023-03-31T18:19:40"
"2024-01-14T07:24:21"
null
NONE
null
### Describe the bug When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error: ``` Traceback (most recent call last): File "/home/sven/code/rector/answer-detection/train.py", line 106, in <module> (dataset, weights) = get_dataset(args.dataset, tokenizer, labels, args.padding) File "/home/sven/code/rector/answer-detection/dataset.py", line 106, in get_dataset dataset = load_dataset("cyanic-selkie/wikianc-en") File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/load.py", line 1794, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1106, in as_dataset datasets = map_nested( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 443, in map_nested mapped = [ File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested return function(data_struct) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1136, in _build_single_dataset ds = self._as_dataset( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1207, in _as_dataset dataset_kwargs = ArrowReader(cache_dir, self.info).read( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 239, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 260, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 203, in _read_files pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0] File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1808, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1514, in from_tables return cls.from_blocks(blocks) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1427, in from_blocks table = cls._concat_blocks(blocks, axis=0) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1373, in _concat_blocks return pa.concat_tables(pa_tables, promote=True) File "pyarrow/table.pxi", line 5224, in pyarrow.lib.concat_tables File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status 
pyarrow.lib.ArrowInvalid: Unable to merge: Field paragraph_anchors has incompatible types: list<: struct<start: uint32 not null, end: uint32 not null, qid: uint32, pageid: uint32, title: string not null> not null> vs list<item: struct<start: uint32, end: uint32, qid: uint32, pageid: uint32, title: string>> ``` This only happens when I load the `train` split, indicating that the size of the dataset is the deciding factor. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cyanic-selkie/wikianc-en", split="train") ``` ### Expected behavior The dataset should load normally without any errors. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-6.2.8-arch1-1-x86_64-with-glibc2.37 - Python version: 3.10.10 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5692/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5691/comments
https://api.github.com/repos/huggingface/datasets/issues/5691/events
https://github.com/huggingface/datasets/pull/5691
1,649,737,526
PR_kwDODunzps5NX08d
5,691
[docs] Compress data files
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "[Confirmed](https://huggingface.slack.com/archives/C02EMARJ65P/p1680541667004199) with the Hub team the file size limit for the Hugging Face Hub is 10MB :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004935 / 0.011008 (-0.006073) | 0.096796 / 0.038508 (0.058288) | 0.032485 / 0.023109 (0.009376) | 0.335342 / 0.275898 (0.059444) | 0.354999 / 0.323480 (0.031519) | 0.005467 / 0.007986 (-0.002519) | 0.005267 / 0.004328 (0.000939) | 0.073988 / 0.004250 (0.069737) | 0.044402 / 0.037052 (0.007350) | 0.331156 / 0.258489 (0.072666) | 0.363595 / 0.293841 (0.069754) | 0.035301 / 0.128546 (-0.093245) | 0.012141 / 0.075646 (-0.063505) | 0.333164 / 0.419271 (-0.086107) | 0.048818 / 0.043533 (0.005286) | 0.331458 / 0.255139 (0.076319) | 0.343567 / 0.283200 (0.060367) | 0.094963 / 0.141683 (-0.046720) | 1.444383 / 1.452155 (-0.007772) | 1.520093 / 1.492716 (0.027377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212311 / 0.018006 (0.194305) | 0.436413 / 0.000490 (0.435923) | 0.000333 / 0.000200 (0.000133) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026670 / 0.037411 (-0.010742) | 0.105774 / 0.014526 (0.091248) | 0.115796 / 0.176557 (-0.060760) | 0.176504 / 0.737135 (-0.560631) | 0.121883 / 0.296338 (-0.174456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400783 / 0.215209 (0.185574) | 4.006608 / 2.077655 (1.928953) | 1.817659 / 1.504120 (0.313539) | 1.619777 / 1.541195 (0.078582) | 1.684247 / 1.468490 (0.215757) | 0.701116 / 4.584777 (-3.883661) | 3.684056 / 3.745712 (-0.061656) | 2.065258 / 5.269862 (-3.204603) | 1.425460 / 4.565676 (-3.140217) | 0.084519 / 0.424275 (-0.339757) | 0.011949 / 0.007607 (0.004342) | 0.496793 / 0.226044 (0.270749) | 4.978864 / 2.268929 (2.709935) | 2.303388 / 55.444624 (-53.141237) | 1.978341 / 6.876477 (-4.898135) | 2.055744 / 2.142072 (-0.086329) | 0.832022 / 4.805227 (-3.973206) | 0.164715 / 6.500664 (-6.335949) | 0.062701 / 0.075469 (-0.012768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.178723 / 1.841788 (-0.663065) | 14.583986 / 8.074308 (6.509678) | 14.189402 / 10.191392 (3.998010) | 0.183867 / 0.680424 (-0.496557) | 0.017565 / 0.534201 (-0.516636) | 0.421345 / 0.579283 (-0.157938) | 0.420235 / 0.434364 (-0.014129) | 0.496758 / 0.540337 (-0.043580) | 0.591558 / 1.386936 (-0.795378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007019 / 0.011353 (-0.004334) | 0.004996 / 0.011008 (-0.006012) | 0.073345 / 0.038508 (0.034836) | 0.033077 / 0.023109 (0.009968) | 0.335954 / 0.275898 (0.060056) | 0.372616 / 0.323480 (0.049136) | 0.005678 / 0.007986 (-0.002308) | 0.003906 / 0.004328 (-0.000423) | 0.072841 / 0.004250 (0.068591) | 0.046829 / 0.037052 (0.009777) | 0.335177 / 0.258489 (0.076688) | 0.382862 / 0.293841 (0.089021) | 0.038406 / 0.128546 (-0.090141) | 0.012110 / 0.075646 (-0.063536) | 0.085796 / 0.419271 (-0.333476) | 0.049896 / 0.043533 (0.006363) | 0.338232 / 0.255139 (0.083093) | 0.361054 / 0.283200 (0.077855) | 0.103171 / 0.141683 (-0.038512) | 1.556692 / 1.452155 (0.104538) | 1.540023 / 1.492716 (0.047306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.223705 / 0.018006 (0.205699) | 0.438771 / 0.000490 (0.438282) | 0.002838 / 0.000200 (0.002639) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028423 / 0.037411 (-0.008988) | 0.110560 / 0.014526 (0.096035) | 0.121629 / 0.176557 (-0.054928) | 0.173638 / 0.737135 (-0.563498) | 0.127062 / 0.296338 (-0.169277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425806 / 0.215209 (0.210597) | 4.251051 / 2.077655 (2.173397) | 2.059735 / 1.504120 (0.555615) | 1.864886 / 1.541195 (0.323692) | 1.941553 / 1.468490 (0.473063) | 0.700084 / 4.584777 (-3.884693) | 3.753150 / 3.745712 (0.007438) | 3.218606 / 5.269862 (-2.051256) | 1.439648 / 4.565676 (-3.126028) | 0.085239 / 0.424275 (-0.339037) | 0.012026 / 0.007607 (0.004419) | 0.521564 / 0.226044 (0.295520) | 5.217902 / 2.268929 (2.948973) | 2.557831 / 55.444624 (-52.886793) | 2.240223 / 6.876477 (-4.636254) | 2.364664 / 2.142072 (0.222591) | 0.825884 / 4.805227 (-3.979343) | 0.167800 / 6.500664 (-6.332864) | 0.063552 / 0.075469 (-0.011917) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255532 / 1.841788 (-0.586256) | 14.747783 / 8.074308 (6.673475) | 14.352263 / 10.191392 (4.160871) | 0.143659 / 0.680424 (-0.536765) | 0.017517 / 0.534201 (-0.516684) | 0.419863 / 0.579283 (-0.159421) | 0.416674 / 0.434364 (-0.017690) | 0.485694 / 0.540337 (-0.054643) | 0.584810 / 1.386936 (-0.802126) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#61db0e9c936bc67c18b37b0960e2f0bb1f8ffdcd \"CML watermark\")\n" ]
"2023-03-31T17:17:26"
"2023-04-19T13:37:32"
"2023-04-19T07:25:58"
MEMBER
null
This PR addresses the comments in #5687 about compressing text file extensions before uploading to the Hub. Also clarified what "too large" means based on the GitLFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5691/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5691", "html_url": "https://github.com/huggingface/datasets/pull/5691", "diff_url": "https://github.com/huggingface/datasets/pull/5691.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5691.patch", "merged_at": "2023-04-19T07:25:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/5689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5689/comments
https://api.github.com/repos/huggingface/datasets/issues/5689/events
https://github.com/huggingface/datasets/pull/5689
1,648,956,349
PR_kwDODunzps5NVMuI
5,689
Support streaming Beam datasets from HF GCS preprocessed data
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"wikipedia\", \"20220301.en\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\nOut[2]: \r\n{'id': '12',\r\n 'url': 'https://en.wikipedia.org/wiki/Anarchism',\r\n 'title': 'Anarchism',\r\n 'text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, placed on the farthest left of the political spectrum, it is usually described alongside communalism and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement,...}\r\n```", "I love your example 🏴‍🅰️", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007859 / 0.011353 (-0.003493) | 0.005129 / 0.011008 (-0.005879) | 0.098070 / 0.038508 (0.059562) | 0.036500 / 0.023109 (0.013391) | 0.311575 / 0.275898 (0.035677) | 0.338351 / 0.323480 (0.014872) | 0.005962 / 0.007986 (-0.002024) | 0.004060 / 0.004328 (-0.000268) | 0.072970 / 0.004250 (0.068719) | 0.049289 / 0.037052 (0.012237) | 0.310303 / 0.258489 (0.051814) | 0.347449 / 0.293841 (0.053608) | 0.046912 / 0.128546 (-0.081634) | 0.011952 / 0.075646 (-0.063694) | 0.333600 / 0.419271 (-0.085671) | 0.052700 / 0.043533 (0.009167) | 0.325486 / 0.255139 (0.070347) | 0.326920 / 0.283200 (0.043720) | 0.107683 / 0.141683 (-0.034000) | 1.416679 / 1.452155 (-0.035476) | 1.502418 / 1.492716 (0.009702) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216520 / 0.018006 (0.198514) | 0.448450 / 0.000490 (0.447960) | 0.004213 / 0.000200 (0.004013) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027081 / 0.037411 (-0.010331) | 0.110989 / 0.014526 (0.096463) | 0.116087 / 0.176557 (-0.060470) | 0.173771 / 0.737135 (-0.563364) | 
0.121240 / 0.296338 (-0.175099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399938 / 0.215209 (0.184729) | 4.017665 / 2.077655 (1.940010) | 1.782327 / 1.504120 (0.278207) | 1.612955 / 1.541195 (0.071761) | 1.698839 / 1.468490 (0.230349) | 0.706702 / 4.584777 (-3.878075) | 4.533425 / 3.745712 (0.787713) | 2.102611 / 5.269862 (-3.167250) | 1.461429 / 4.565676 (-3.104248) | 0.085719 / 0.424275 (-0.338556) | 0.012104 / 0.007607 (0.004497) | 0.507397 / 0.226044 (0.281352) | 5.061572 / 2.268929 (2.792643) | 2.272106 / 55.444624 (-53.172518) | 1.935575 / 6.876477 (-4.940901) | 2.102541 / 2.142072 (-0.039532) | 0.838395 / 4.805227 (-3.966832) | 0.168573 / 6.500664 (-6.332091) | 0.064234 / 0.075469 (-0.011235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190077 / 1.841788 (-0.651710) | 15.765587 / 8.074308 (7.691279) | 14.694626 / 10.191392 (4.503234) | 0.142912 / 0.680424 (-0.537512) | 0.017669 / 0.534201 (-0.516532) | 0.421502 / 0.579283 (-0.157781) | 0.452732 / 0.434364 (0.018368) | 0.497480 / 0.540337 (-0.042857) | 0.586310 / 1.386936 (-0.800626) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007629 / 0.011353 (-0.003724) | 0.005330 / 0.011008 (-0.005679) | 0.076366 / 0.038508 (0.037858) | 0.034703 / 0.023109 (0.011593) | 0.356300 / 0.275898 (0.080402) | 0.392909 / 0.323480 (0.069429) | 0.005959 / 0.007986 (-0.002026) | 0.004140 / 0.004328 
(-0.000188) | 0.075289 / 0.004250 (0.071039) | 0.047880 / 0.037052 (0.010828) | 0.357289 / 0.258489 (0.098800) | 0.404554 / 0.293841 (0.110714) | 0.037182 / 0.128546 (-0.091365) | 0.012266 / 0.075646 (-0.063380) | 0.088554 / 0.419271 (-0.330718) | 0.049698 / 0.043533 (0.006165) | 0.353453 / 0.255139 (0.098314) | 0.373252 / 0.283200 (0.090052) | 0.101892 / 0.141683 (-0.039791) | 1.481534 / 1.452155 (0.029380) | 1.553818 / 1.492716 (0.061102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229891 / 0.018006 (0.211884) | 0.452444 / 0.000490 (0.451954) | 0.000434 / 0.000200 (0.000234) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030170 / 0.037411 (-0.007241) | 0.115097 / 0.014526 (0.100571) | 0.122094 / 0.176557 (-0.054463) | 0.171352 / 0.737135 (-0.565784) | 0.128441 / 0.296338 (-0.167898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428347 / 0.215209 (0.213138) | 4.266243 / 2.077655 (2.188588) | 2.148327 / 1.504120 (0.644207) | 1.874141 / 1.541195 (0.332946) | 1.968737 / 1.468490 (0.500246) | 0.715320 / 4.584777 (-3.869457) | 4.166097 / 3.745712 (0.420384) | 2.169550 / 5.269862 (-3.100312) | 1.377441 / 4.565676 (-3.188236) | 0.086376 / 0.424275 (-0.337899) | 0.012018 / 0.007607 (0.004411) | 0.517433 / 0.226044 (0.291388) | 5.167327 / 2.268929 (2.898398) | 2.545822 / 55.444624 (-52.898803) | 2.241726 / 6.876477 (-4.634751) | 2.327220 / 2.142072 (0.185147) | 0.841618 / 4.805227 (-3.963609) | 0.169473 / 6.500664 (-6.331191) | 0.065505 / 0.075469 (-0.009964) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270476 / 1.841788 (-0.571312) | 17.049885 / 8.074308 (8.975577) | 14.847615 / 10.191392 (4.656223) | 0.168671 / 0.680424 (-0.511753) | 0.017564 / 0.534201 (-0.516637) | 0.424780 / 0.579283 (-0.154503) | 0.517392 / 0.434364 (0.083028) | 0.561197 / 0.540337 (0.020859) | 0.697792 / 1.386936 (-0.689144) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce06edf0afb70027ffbd3c2ddec5d28037e9bd31 \"CML watermark\")\n" ]
"2023-03-31T08:44:24"
"2023-04-12T05:57:55"
"2023-04-12T05:50:31"
MEMBER
null
This PR implements streaming Apache Beam datasets that are already preprocessed by us and stored in the HF Google Cloud Storage: - natural_questions - wiki40b - wikipedia This is done by streaming from the prepared Arrow files in HF Google Cloud Storage. This will fix their corresponding dataset viewers. Related to: - https://github.com/huggingface/datasets-server/pull/988#discussion_r1150767138 Related to: - https://huggingface.co/datasets/natural_questions/discussions/4 - https://huggingface.co/datasets/wiki40b/discussions/2 - https://huggingface.co/datasets/wikipedia/discussions/9 CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5689/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5689/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5689", "html_url": "https://github.com/huggingface/datasets/pull/5689", "diff_url": "https://github.com/huggingface/datasets/pull/5689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5689.patch", "merged_at": "2023-04-12T05:50:30" }
true
https://api.github.com/repos/huggingface/datasets/issues/5690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5690/comments
https://api.github.com/repos/huggingface/datasets/issues/5690/events
https://github.com/huggingface/datasets/issues/5690
1,649,289,883
I_kwDODunzps5iTiqb
5,690
raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api
{ "login": "wccccp", "id": 55964850, "node_id": "MDQ6VXNlcjU1OTY0ODUw", "avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wccccp", "html_url": "https://github.com/wccccp", "followers_url": "https://api.github.com/users/wccccp/followers", "following_url": "https://api.github.com/users/wccccp/following{/other_user}", "gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wccccp/subscriptions", "organizations_url": "https://api.github.com/users/wccccp/orgs", "repos_url": "https://api.github.com/users/wccccp/repos", "events_url": "https://api.github.com/users/wccccp/events{/privacy}", "received_events_url": "https://api.github.com/users/wccccp/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of `datasets`?)\r\n\r\n~@wccccp what I can suggest you is to uninstall and reinstall completely huggingface_hub and datasets? My first guess is that there is a discrepancy somewhere in your setup 😕~", "@wccccp Actually I have also been able to reproduce the error so it's not an issue with your setup.\r\n\r\n@huggingface/datasets I found this issue quite weird. Is this a module that is not used very often?\r\nThe problematic line is [this one](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L476) where `huggingface_hub.hf_api.DatasetInfo` is used. `huggingface_hub` is imported [here](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L6) as `import huggingface_hub`. However since modules are lazy-loaded in `hfh` you need to explicitly import them (i.e. `import huggingface_hub.hf_api`).\r\n\r\nWhat's weird is that nothing has changed for months. Datasets code seems that it didn't change for 2 years when I git-blame this part. And lazy-loading was introduced 1 year ago in `huggingface_hub`. Could it be that `data_files.py` is a file almost never used?\r\n", "For context, I tried to run `import huggingface_hub; huggingface_hub.hf_api.DatasetInfo` in the terminal with different versions of `hfh` and I need to go back to `huggingface_hub==0.7.0` to make it work (latest is 0.13.3).", "Before the error happens at line 120 in `data_files.py`, `datasets.filesystems.hffilesystem` is imported at the top of `data_files.py` and this file does `from huggingface_hub.hf_api import DatasetInfo` - so `huggingface_hub.hf_api` is imported. Not sure how the error could happen, what version of `datasets` are you using @wccccp ?", "Closing due to inactivity." ]
"2023-03-31T08:22:22"
"2023-07-21T14:21:57"
"2023-07-21T14:21:57"
NONE
null
### Describe the bug rta.sh Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ### Reproduction _No response_ ### Logs ```shell Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ``` ### System info ```shell - huggingface_hub version: 0.13.2 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/appuser/.cache/huggingface/token - Has saved token ?: False - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 1.7.1 - Jinja2: N/A - Graphviz: N/A - Pydot: N/A - Pillow: 9.3.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /home/appuser/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /home/appuser/.cache/huggingface/assets - HF_TOKEN_PATH: /home/appuser/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5690/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5688/comments
https://api.github.com/repos/huggingface/datasets/issues/5688/events
https://github.com/huggingface/datasets/issues/5688
1,648,463,504
I_kwDODunzps5iQY6Q
5,688
Wikipedia download_and_prepare for GCS
{ "login": "adrianfagerland", "id": 25522531, "node_id": "MDQ6VXNlcjI1NTIyNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/25522531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adrianfagerland", "html_url": "https://github.com/adrianfagerland", "followers_url": "https://api.github.com/users/adrianfagerland/followers", "following_url": "https://api.github.com/users/adrianfagerland/following{/other_user}", "gists_url": "https://api.github.com/users/adrianfagerland/gists{/gist_id}", "starred_url": "https://api.github.com/users/adrianfagerland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adrianfagerland/subscriptions", "organizations_url": "https://api.github.com/users/adrianfagerland/orgs", "repos_url": "https://api.github.com/users/adrianfagerland/repos", "events_url": "https://api.github.com/users/adrianfagerland/events{/privacy}", "received_events_url": "https://api.github.com/users/adrianfagerland/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processing, using one of the distributed back-ends supported by Apache Beam: https://beam.apache.org/get-started/beam-overview/#apache-beam-pipeline-runners\r\n\r\nThat is, you are trying to process the source wikipedia data on your machine (not distributed) when passing `beam_runner=\"DirectRunner\"`.\r\n\r\nAs documented in the wikipedia dataset page (https://huggingface.co/datasets/wikipedia):\r\n\r\n Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n \r\n from datasets import load_dataset\r\n \r\n load_dataset(\"wikipedia\", \"20220301.en\")\r\n\r\n The list of pre-processed subsets is:\r\n - \"20220301.de\"\r\n - \"20220301.en\"\r\n - \"20220301.fr\"\r\n - \"20220301.frr\"\r\n - \"20220301.it\"\r\n - \"20220301.simple\"\r\n\r\nTo download the available processed data (in Arrow format):\r\n```python\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(your_path)\r\n```", "When running this using :\r\n```\r\nimport datasets\r\nfrom apache_beam.options.pipeline_options import PipelineOptions\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbeam_options = PipelineOptions(\r\n region=\"europe-west4\",\r\n project=\"tdt4310\",\r\n temp_location=output_dir+\"tmp/\")\r\n\r\n\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\", beam_runner=\"dataflow\", beam_options=beam_options)\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\nI now get this error:\r\n```\r\nraise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json\r\nDownloading data files: 0%| | 0/1 [00:00<?, ?it/s]\r\n```\r\n\r\nI get the same error for this:\r\n```\r\nimport datasets\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\n\r\n\r\n\r\n", "`wikipedia` is no longer a Beam dataset, so the above code should work now.\r\n\r\nPS: You can use [these files](https://huggingface.co/datasets/wikipedia/tree/main/data/20220301.en) (or a newer dump at https://huggingface.co/datasets/wikimedia/wikipedia/tree/main/20231101.en) instead of generating the Parquet version yourself" ]
"2023-03-30T23:43:22"
"2024-03-15T15:59:18"
"2024-03-15T15:59:18"
NONE
null
### Describe the bug I am unable to download the wikipedia dataset onto GCS. When I run the script provided, the memory first gets eaten up, then it crashes. I tried running this on a VM with 128GB RAM and all I got were two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_ I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage. ### Steps to reproduce the bug Run this and insert a path: ``` import datasets builder = datasets.load_dataset_builder( "wikipedia", language="en", date="20230320", beam_runner="DirectRunner") builder.download_and_prepare({path}, file_format="parquet") ``` This is where the problem of it eating RAM occurs. I have also tried several versions of this, based on the docs: ``` import gcsfs import datasets storage_options = {"project": "tdt4310", "token": "cloud"} fs = gcsfs.GCSFileSystem(**storage_options) output_dir = "gcs://wikipediadata/" builder = datasets.load_dataset_builder( "wikipedia", date="20230320", language="en", beam_runner="DirectRunner") builder.download_and_prepare( output_dir, storage_options=storage_options, file_format="parquet") ``` The error message that is received here is: > ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite'] I have run `pip install apache-beam[gcp]` ### Expected behavior The wikipedia data loaded into GCS. Everything worked when testing with a smaller demo dataset found somewhere in the docs ### Environment info Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5688/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5687/comments
https://api.github.com/repos/huggingface/datasets/issues/5687/events
https://github.com/huggingface/datasets/issues/5687
1,647,009,018
I_kwDODunzps5iK1z6
5,687
Document to compress data files before uploading
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`, `.json`, `.jsonl`, and `txt`, we recommend compressing them before uploading to the Hub. These file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of supported file extensions.", "Hi @stevhliu, thanks for your suggestion.\r\n\r\nI agree it is a good opportunity to mention that audio/image file formats are also supported.\r\n\r\nNit:\r\nI would not mention .zip, .rar after \"text, audio, and image data extensions\". Those are \"compression\" extensions and not \"text, audio, and image data extensions\".\r\n\r\nWhat about something similar to:\r\n> We support many text, audio, and image data extensions such as `.csv`, `.mp3`, and `.jpg` among many others. For text data extensions like `.csv`, `.json`, `.jsonl`, and `.txt`, we recommend compressing them before uploading to the Hub (to `.zip` or `.gz` file extension for example). \r\n>\r\n> Note that text file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of tracked file extensions by default.\r\n\r\nNote that for compressions I have mentioned:\r\n- gz, to compress individual files\r\n- zip, to compress and archive multiple files; zip is preferred rather than tar because it supports streaming out of the box", "Perfect, thanks for making the distinction between compression and data extensions!" ]
"2023-03-30T06:41:07"
"2023-04-19T07:25:59"
"2023-04-19T07:25:59"
MEMBER
null
In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are too large, Git will fail to commit/upload them. I think for those file extensions (.csv, .json, .jsonl, .txt), we should instead recommend **compressing** the data files (using ZIP, for example) before uploading them to the Hub. - Compressed files are tracked by Git LFS in our default `.gitattributes` file What do you think? CC: @stevhliu See related issue: - https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1
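A short sketch of the compression recommendation above, assuming a local `train.csv`; the file names and repo id are placeholders and are not taken from the issue. It compresses the CSV to `.gz` (one of the extensions tracked by Git LFS in the default `.gitattributes`) and uploads it to a dataset repo.

```python
import gzip
import shutil

from huggingface_hub import HfApi

# Compress the CSV first: a .gz (like a .zip) file is tracked by Git LFS by
# default, while a large plain .csv may fail to commit/upload.
with open("train.csv", "rb") as src, gzip.open("train.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Upload the compressed file to a dataset repo (placeholder repo id).
HfApi().upload_file(
    path_or_fileobj="train.csv.gz",
    path_in_repo="data/train.csv.gz",
    repo_id="username/my-dataset",
    repo_type="dataset",
)
```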
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5687/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5687/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5686/comments
https://api.github.com/repos/huggingface/datasets/issues/5686/events
https://github.com/huggingface/datasets/pull/5686
1,646,308,228
PR_kwDODunzps5NMXdu
5,686
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008460 / 0.011353 (-0.002893) | 0.006114 / 0.011008 (-0.004894) | 0.121496 / 0.038508 (0.082987) | 0.035030 / 0.023109 (0.011920) | 0.397778 / 0.275898 (0.121880) | 0.429020 / 0.323480 (0.105540) | 0.007811 / 0.007986 (-0.000174) | 0.006269 / 0.004328 (0.001940) | 0.098895 / 0.004250 (0.094645) | 0.045407 / 0.037052 (0.008355) | 0.413679 / 0.258489 (0.155189) | 0.437491 / 0.293841 (0.143650) | 0.053207 / 0.128546 (-0.075339) | 0.018471 / 0.075646 (-0.057175) | 0.414800 / 0.419271 (-0.004472) | 0.060864 / 0.043533 (0.017332) | 0.398501 / 0.255139 (0.143362) | 0.421142 / 0.283200 (0.137942) | 0.114908 / 0.141683 (-0.026775) | 1.678630 / 1.452155 (0.226475) | 1.782313 / 1.492716 (0.289596) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280783 / 0.018006 (0.262777) | 0.591573 / 0.000490 (0.591083) | 0.005797 / 0.000200 (0.005597) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030431 / 0.037411 (-0.006981) | 0.117342 / 0.014526 (0.102816) | 0.128456 / 0.176557 (-0.048101) | 0.198782 / 0.737135 (-0.538354) | 0.128501 / 0.296338 (-0.167838) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.603073 / 0.215209 (0.387864) | 6.101354 / 2.077655 (4.023699) | 2.527812 / 1.504120 (1.023692) | 2.101468 / 1.541195 (0.560273) | 2.092813 / 1.468490 (0.624323) | 1.182150 / 4.584777 (-3.402627) | 5.389278 / 3.745712 (1.643566) | 5.041001 / 5.269862 (-0.228860) | 2.650581 / 4.565676 (-1.915095) | 0.138761 / 0.424275 (-0.285514) | 0.014209 / 0.007607 (0.006602) | 0.748596 / 0.226044 (0.522552) | 7.373937 / 2.268929 (5.105008) | 3.245882 / 55.444624 (-52.198742) | 2.523569 / 6.876477 (-4.352908) | 2.581343 / 2.142072 (0.439270) | 1.340436 / 4.805227 (-3.464791) | 0.241388 / 6.500664 (-6.259276) | 0.076634 / 0.075469 (0.001164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480237 / 1.841788 (-0.361551) | 16.781338 / 8.074308 (8.707030) | 19.735028 / 10.191392 (9.543636) | 0.256872 / 0.680424 (-0.423551) | 0.029211 / 0.534201 (-0.504990) | 0.503292 / 0.579283 (-0.075991) | 0.584510 / 0.434364 (0.150146) | 0.580293 / 0.540337 (0.039955) | 0.678863 / 1.386936 (-0.708073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009972 / 0.011353 (-0.001381) | 0.006107 / 0.011008 (-0.004902) | 0.096188 / 0.038508 (0.057680) | 0.033320 / 0.023109 (0.010210) | 0.420789 / 0.275898 (0.144891) | 0.460488 / 0.323480 (0.137008) | 0.006492 / 0.007986 (-0.001493) | 0.005325 / 0.004328 (0.000997) | 0.094974 / 0.004250 (0.090723) | 0.047708 / 0.037052 (0.010655) | 0.426689 / 0.258489 (0.168200) | 0.476440 / 0.293841 (0.182599) | 0.052776 / 0.128546 (-0.075770) | 0.018779 / 0.075646 (-0.056868) | 0.119598 / 0.419271 (-0.299673) | 0.061800 / 0.043533 (0.018267) | 0.421305 / 0.255139 (0.166166) | 0.441125 / 0.283200 (0.157925) | 0.114221 / 0.141683 (-0.027462) | 1.712681 / 1.452155 (0.260526) | 1.852316 / 1.492716 (0.359600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272412 / 0.018006 (0.254405) | 0.583996 / 0.000490 (0.583506) | 0.000505 / 0.000200 
(0.000305) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029553 / 0.037411 (-0.007858) | 0.124921 / 0.014526 (0.110395) | 0.133338 / 0.176557 (-0.043218) | 0.193811 / 0.737135 (-0.543325) | 0.147973 / 0.296338 (-0.148365) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.595241 / 0.215209 (0.380032) | 6.012015 / 2.077655 (3.934360) | 2.611295 / 1.504120 (1.107175) | 2.290127 / 1.541195 (0.748932) | 2.300366 / 1.468490 (0.831876) | 1.197602 / 4.584777 (-3.387175) | 5.439064 / 3.745712 (1.693352) | 2.906088 / 5.269862 (-2.363773) | 1.919183 / 4.565676 (-2.646493) | 0.132166 / 0.424275 (-0.292109) | 0.014544 / 0.007607 (0.006937) | 0.726377 / 0.226044 (0.500333) | 7.361023 / 2.268929 (5.092094) | 3.289266 / 55.444624 (-52.155358) | 2.635570 / 6.876477 (-4.240907) | 2.595691 / 2.142072 (0.453619) | 1.329458 / 4.805227 (-3.475769) | 0.239419 / 6.500664 (-6.261245) | 0.076316 / 0.075469 (0.000847) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547616 / 1.841788 (-0.294172) | 17.374315 / 8.074308 (9.300007) | 20.216275 / 10.191392 (10.024883) | 0.252102 / 0.680424 (-0.428322) | 0.027535 / 0.534201 (-0.506665) | 0.524618 / 0.579283 (-0.054666) | 0.596803 / 0.434364 (0.162439) | 0.652632 / 0.540337 (0.112294) | 0.762272 / 1.386936 (-0.624664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c7d4b2f981f8cf639dcbd80f40a41aa5b1693c6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008236 / 0.011353 (-0.003117) | 0.006186 / 0.011008 (-0.004822) | 0.117852 / 0.038508 (0.079344) | 0.034711 / 0.023109 (0.011602) | 0.447564 / 0.275898 (0.171666) | 0.438727 / 0.323480 (0.115247) | 0.006576 / 0.007986 (-0.001410) | 0.005903 / 0.004328 (0.001574) | 0.094309 / 0.004250 (0.090059) | 0.042760 / 0.037052 (0.005708) | 0.393269 / 0.258489 (0.134780) | 0.438061 / 0.293841 (0.144220) | 0.059029 / 0.128546 (-0.069517) | 0.020296 / 0.075646 (-0.055350) | 0.412057 / 0.419271 (-0.007215) | 0.059808 / 0.043533 (0.016275) | 0.407243 / 0.255139 (0.152104) | 0.414290 / 0.283200 (0.131090) | 0.107701 / 0.141683 (-0.033981) | 1.671522 / 1.452155 (0.219367) | 1.775055 / 1.492716 (0.282338) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275242 / 0.018006 (0.257236) | 0.599698 / 0.000490 (0.599208) | 0.001289 / 0.000200 (0.001089) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029579 / 0.037411 (-0.007832) | 0.127249 / 0.014526 (0.112723) | 0.137431 / 0.176557 (-0.039126) | 0.220330 / 0.737135 (-0.516805) | 0.133540 / 0.296338 (-0.162798) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571989 / 0.215209 (0.356780) | 5.931503 / 2.077655 (3.853848) | 2.526646 / 1.504120 (1.022527) | 2.189476 / 1.541195 (0.648281) | 2.151935 / 1.468490 (0.683444) | 1.242440 / 4.584777 (-3.342337) | 5.599675 / 3.745712 (1.853963) | 3.242035 / 5.269862 (-2.027826) | 2.368361 / 4.565676 (-2.197315) | 0.145659 / 0.424275 (-0.278616) | 0.013813 / 0.007607 (0.006206) | 0.782495 / 0.226044 (0.556451) | 7.861619 / 2.268929 (5.592690) | 3.241001 / 55.444624 (-52.203623) | 2.611025 / 6.876477 (-4.265452) | 2.667263 / 2.142072 (0.525191) | 1.429992 / 4.805227 (-3.375235) | 0.243008 / 6.500664 (-6.257656) | 0.083686 / 0.075469 (0.008217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565526 / 1.841788 (-0.276262) | 18.260815 / 8.074308 (10.186507) | 22.586133 / 10.191392 (12.394741) | 0.231864 / 0.680424 (-0.448559) | 0.030877 / 0.534201 (-0.503324) | 0.569726 / 0.579283 (-0.009557) | 0.678638 / 
0.434364 (0.244274) | 0.611810 / 0.540337 (0.071472) | 0.718771 / 1.386936 (-0.668165) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009398 / 0.011353 (-0.001955) | 0.006452 / 0.011008 (-0.004556) | 0.103352 / 0.038508 (0.064844) | 0.034773 / 0.023109 (0.011664) | 0.523782 / 0.275898 (0.247884) | 0.523554 / 0.323480 (0.200074) | 0.006990 / 0.007986 (-0.000996) | 0.004994 / 0.004328 (0.000666) | 0.102199 / 0.004250 (0.097949) | 0.050087 / 0.037052 (0.013035) | 0.496662 / 0.258489 (0.238173) | 0.563130 / 0.293841 (0.269289) | 0.052851 / 0.128546 (-0.075695) | 0.019824 / 0.075646 (-0.055822) | 0.122657 / 0.419271 (-0.296614) | 0.057714 / 0.043533 (0.014181) | 0.470502 / 0.255139 (0.215363) | 0.518908 / 0.283200 (0.235708) | 0.114374 / 0.141683 (-0.027309) | 1.795918 / 1.452155 (0.343763) | 1.957461 / 1.492716 (0.464744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303921 / 0.018006 (0.285915) | 0.584406 / 0.000490 (0.583916) | 0.000444 / 0.000200 (0.000244) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032254 / 0.037411 (-0.005158) | 0.129966 / 0.014526 (0.115440) | 0.151000 / 0.176557 (-0.025557) | 0.234060 / 0.737135 (-0.503076) | 0.149444 / 0.296338 (-0.146895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666627 / 0.215209 (0.451418) | 7.054701 / 2.077655 (4.977046) | 2.836895 / 1.504120 (1.332775) | 
2.561994 / 1.541195 (1.020799) | 2.672460 / 1.468490 (1.203970) | 1.411929 / 4.584777 (-3.172848) | 6.026918 / 3.745712 (2.281206) | 3.341745 / 5.269862 (-1.928116) | 2.280317 / 4.565676 (-2.285359) | 0.156635 / 0.424275 (-0.267641) | 0.014256 / 0.007607 (0.006649) | 0.804830 / 0.226044 (0.578786) | 8.106960 / 2.268929 (5.838031) | 3.597452 / 55.444624 (-51.847172) | 3.002847 / 6.876477 (-3.873630) | 2.931160 / 2.142072 (0.789088) | 1.484172 / 4.805227 (-3.321056) | 0.254166 / 6.500664 (-6.246498) | 0.080554 / 0.075469 (0.005085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.809909 / 1.841788 (-0.031879) | 18.988994 / 8.074308 (10.914686) | 23.153442 / 10.191392 (12.962050) | 0.250554 / 0.680424 (-0.429870) | 0.048677 / 0.534201 (-0.485524) | 0.574109 / 0.579283 (-0.005174) | 0.640917 / 0.434364 (0.206553) | 0.725215 / 0.540337 (0.184878) | 0.878234 / 1.386936 (-0.508702) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3667d6e17d68503469c8e88ec344b7cccfa2346 \"CML watermark\")\n" ]
"2023-03-29T18:24:13"
"2023-03-29T18:33:49"
"2023-03-29T18:24:22"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5686/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5686", "html_url": "https://github.com/huggingface/datasets/pull/5686", "diff_url": "https://github.com/huggingface/datasets/pull/5686.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5686.patch", "merged_at": "2023-03-29T18:24:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/5685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5685/comments
https://api.github.com/repos/huggingface/datasets/issues/5685/events
https://github.com/huggingface/datasets/issues/5685
1,646,048,667
I_kwDODunzps5iHLWb
5,685
Broken Image render on the hub website
{ "login": "FrancescoSaverioZuppichini", "id": 15908060, "node_id": "MDQ6VXNlcjE1OTA4MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancescoSaverioZuppichini", "html_url": "https://github.com/FrancescoSaverioZuppichini", "followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers", "following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}", "gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions", "organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs", "repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos", "events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! \r\n\r\nYou can fix the viewer by adding the `dataset_info` YAML field deleted in https://huggingface.co/datasets/Francesco/cell-towers/commit/b95b59ddd91ebe9c12920f0efe0ed415cd0d4298 back to the metadata section of the card. \r\n\r\nTo avoid this issue in the feature, you can use `huggingface_hub`'s [RepoCard](https://huggingface.co/docs/huggingface_hub/package_reference/cards) API to update the dataset card instead of `upload_file`:\r\n```python\r\nfrom huggingface_hub import DatasetCard\r\n# Load card\r\ncard = DatasetCard.load(\"<namespace>/<repo_id>\")\r\n# Modify card content\r\ncard.content = ...\r\n# Push card to the Hub\r\ncard.push_to_hub(\"<namespace>/<repo_id>\")\r\n```\r\n\r\nHowever, the best solution would be to use the features info stored in the header of the Parquet shards generated with `push_to_hub` on the viewer side to avoid unexpected issues such as this one. This shouldn't be too hard to address.", "Thanks for reporting @FrancescoSaverioZuppichini.\r\n\r\nFor future issues with your specific dataset, you can use its \"Community\" tab to start a conversation: https://huggingface.co/datasets/Francesco/cell-towers/discussions/new", "Thanks @albertvillanova , @mariosasko I was not aware of this requirement from the doc (must have skipped :sweat_smile: )\r\n\r\nConfirmed, adding back `dataset_info` fixed the issu" ]
"2023-03-29T15:25:30"
"2023-03-30T07:54:25"
"2023-03-30T07:54:25"
NONE
null
### Describe the bug Hi :wave: Not sure if this is the right place to ask, but I am trying to load a huge amount of datasets on the hub (:partying_face: ) but I am facing a little issue with the `image` type ![image](https://user-images.githubusercontent.com/15908060/228587875-427a37f1-3a31-4e17-8bbe-0f759003910d.png) See this [dataset](https://huggingface.co/datasets/Francesco/cell-towers), basically for some reason the first image has numerical bytes inside, not sure if that is okay, but the image render feature **doesn't work** So the dataset is stored in the following way ```python builder.download_and_prepare(output_dir=str(output_dir)) ds = builder.as_dataset(split="train") # [NOTE] no idea how to push it from the builder folder ds.push_to_hub(repo_id=repo_id) builder.as_dataset(split="validation").push_to_hub(repo_id=repo_id) ds = builder.as_dataset(split="test") ds.push_to_hub(repo_id=repo_id) ``` The build is this class ```python class COCOLikeDatasetBuilder(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") def _info(self): features = datasets.Features( { "image_id": datasets.Value("int64"), "image": datasets.Image(), "width": datasets.Value("int32"), "height": datasets.Value("int32"), "objects": datasets.Sequence( { "id": datasets.Value("int64"), "area": datasets.Value("int64"), "bbox": datasets.Sequence( datasets.Value("float32"), length=4 ), "category": datasets.ClassLabel(names=categories), } ), } ) return datasets.DatasetInfo( description=description, features=features, homepage=homepage, license=license, citation=citation, ) def _split_generators(self, dl_manager): archive = dl_manager.download(url) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "annotation_file_path": "train/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={ "annotation_file_path": "test/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), datasets.SplitGenerator( name=datasets.Split.TEST, gen_kwargs={ "annotation_file_path": "valid/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), ] def _generate_examples(self, annotation_file_path, files): def process_annot(annot, category_id_to_category): return { "id": annot["id"], "area": annot["area"], "bbox": annot["bbox"], "category": category_id_to_category[annot["category_id"]], } image_id_to_image = {} idx = 0 # This loop relies on the ordering of the files in the archive: # Annotation files come first, then the images. 
for path, f in files: file_name = os.path.basename(path) if annotation_file_path in path: annotations = json.load(f) category_id_to_category = { category["id"]: category["name"] for category in annotations["categories"] } print(category_id_to_category) image_id_to_annotations = collections.defaultdict(list) for annot in annotations["annotations"]: image_id_to_annotations[annot["image_id"]].append(annot) image_id_to_image = { annot["file_name"]: annot for annot in annotations["images"] } elif file_name in image_id_to_image: image = image_id_to_image[file_name] objects = [ process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]] ] print(file_name) yield idx, { "image_id": image["id"], "image": {"path": path, "bytes": f.read()}, "width": image["width"], "height": image["height"], "objects": objects, } idx += 1 ``` Basically, I want to add to the hub every dataset I come across on coco format Thanks Fra ### Steps to reproduce the bug In this case, you can just navigate on the [dataset](https://huggingface.co/datasets/Francesco/cell-towers) ### Expected behavior I was expecting the image rendering feature to work ### Environment info Not a lot to share, I am using `datasets` from a fresh venv
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5685/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5684/comments
https://api.github.com/repos/huggingface/datasets/issues/5684/events
https://github.com/huggingface/datasets/pull/5684
1,646,013,226
PR_kwDODunzps5NLXWm
5,684
Release: 2.11.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007017 / 0.011353 (-0.004335) | 0.004917 / 0.011008 (-0.006091) | 0.098391 / 0.038508 (0.059883) | 0.032677 / 0.023109 (0.009568) | 0.312126 / 0.275898 (0.036227) | 0.352477 / 0.323480 (0.028998) | 0.005960 / 0.007986 (-0.002025) | 0.003801 / 0.004328 (-0.000528) | 0.073916 / 0.004250 (0.069666) | 0.045610 / 0.037052 (0.008557) | 0.319626 / 0.258489 (0.061137) | 0.370575 / 0.293841 (0.076734) | 0.035888 / 0.128546 (-0.092658) | 0.012012 / 0.075646 (-0.063635) | 0.338290 / 0.419271 (-0.080982) | 0.049452 / 0.043533 (0.005919) | 0.301226 / 0.255139 (0.046087) | 0.336744 / 0.283200 (0.053545) | 0.100835 / 0.141683 (-0.040847) | 1.500008 / 1.452155 (0.047853) | 1.566757 / 1.492716 (0.074041) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220668 / 0.018006 (0.202662) | 0.449273 / 0.000490 (0.448784) | 0.003861 / 0.000200 (0.003661) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026847 / 0.037411 (-0.010565) | 0.105916 / 0.014526 (0.091390) | 0.116245 / 0.176557 (-0.060312) | 0.172617 / 0.737135 (-0.564519) | 0.122846 / 0.296338 (-0.173492) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417906 / 0.215209 (0.202697) | 4.169092 / 2.077655 (2.091437) | 
1.934439 / 1.504120 (0.430319) | 1.735718 / 1.541195 (0.194523) | 1.828205 / 1.468490 (0.359715) | 0.697446 / 4.584777 (-3.887331) | 3.802830 / 3.745712 (0.057118) | 3.686464 / 5.269862 (-1.583398) | 1.863924 / 4.565676 (-2.701752) | 0.086520 / 0.424275 (-0.337755) | 0.012101 / 0.007607 (0.004493) | 0.521252 / 0.226044 (0.295208) | 5.200937 / 2.268929 (2.932009) | 2.414290 / 55.444624 (-53.030334) | 2.070890 / 6.876477 (-4.805587) | 2.237693 / 2.142072 (0.095621) | 0.843417 / 4.805227 (-3.961811) | 0.167856 / 6.500664 (-6.332809) | 0.064997 / 0.075469 (-0.010472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212334 / 1.841788 (-0.629454) | 14.710632 / 8.074308 (6.636324) | 14.877489 / 10.191392 (4.686097) | 0.151268 / 0.680424 (-0.529156) | 0.018663 / 0.534201 (-0.515538) | 0.429678 / 0.579283 (-0.149605) | 0.425054 / 0.434364 (-0.009310) | 0.502804 / 0.540337 (-0.037533) | 0.587932 / 1.386936 (-0.799004) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007462 / 0.011353 (-0.003891) | 0.005307 / 0.011008 (-0.005701) | 0.074309 / 0.038508 (0.035801) | 0.033437 / 0.023109 (0.010328) | 0.355087 / 0.275898 (0.079189) | 0.391417 / 0.323480 (0.067937) | 0.005904 / 0.007986 (-0.002082) | 0.004062 / 0.004328 (-0.000266) | 0.073801 / 0.004250 (0.069550) | 0.048503 / 0.037052 (0.011451) | 0.359547 / 0.258489 (0.101058) | 0.405325 / 0.293841 (0.111484) | 0.036615 / 0.128546 (-0.091931) | 0.012185 / 0.075646 (-0.063461) | 0.086829 / 0.419271 (-0.332443) | 0.049101 / 0.043533 (0.005569) | 0.334259 / 0.255139 (0.079120) | 0.376317 / 0.283200 (0.093117) | 0.099935 / 0.141683 (-0.041748) | 1.483166 / 1.452155 (0.031011) | 1.569092 / 1.492716 (0.076375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207528 / 0.018006 (0.189521) | 0.437473 / 0.000490 (0.436983) | 0.004915 / 0.000200 (0.004715) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028632 / 0.037411 (-0.008780) | 0.111782 / 0.014526 (0.097256) | 0.122545 / 0.176557 (-0.054011) | 0.171191 / 0.737135 (-0.565945) | 0.128999 / 0.296338 (-0.167339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424422 / 0.215209 (0.209213) | 4.239488 / 2.077655 (2.161833) | 2.027969 / 1.504120 (0.523849) | 1.800667 / 1.541195 (0.259473) | 1.898701 / 1.468490 (0.430211) | 0.711453 / 4.584777 (-3.873324) | 3.766696 / 3.745712 (0.020984) | 2.107530 / 5.269862 (-3.162331) | 1.347137 / 4.565676 (-3.218540) | 0.086823 / 0.424275 (-0.337452) | 0.012137 / 0.007607 (0.004530) | 0.523143 / 0.226044 (0.297099) | 5.273434 / 2.268929 (3.004505) | 2.545463 / 55.444624 (-52.899161) | 2.246683 / 6.876477 (-4.629793) | 2.296862 / 2.142072 (0.154789) | 0.855690 / 4.805227 (-3.949538) | 0.168526 / 6.500664 (-6.332138) | 0.063392 / 0.075469 (-0.012078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248926 / 1.841788 (-0.592862) | 14.676308 / 8.074308 (6.602000) | 14.524364 / 10.191392 (4.332972) | 0.184138 / 0.680424 (-0.496286) | 0.017259 / 0.534201 (-0.516942) | 0.433875 / 0.579283 (-0.145408) | 0.416787 / 0.434364 (-0.017577) | 0.532391 / 0.540337 (-0.007947) | 0.628572 / 1.386936 (-0.758364) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3929cc227a474ce0c716146c8d14ae94f8a7625b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006469 / 0.011353 (-0.004884) | 0.004499 / 0.011008 (-0.006510) | 0.098856 / 0.038508 (0.060348) | 0.027753 / 0.023109 (0.004644) | 0.321348 / 0.275898 (0.045450) | 0.351480 / 0.323480 (0.028000) | 0.004949 / 0.007986 (-0.003036) | 0.004655 / 0.004328 (0.000327) | 0.076732 / 0.004250 (0.072482) | 0.036175 / 0.037052 (-0.000878) | 0.310111 / 0.258489 (0.051622) | 0.372427 / 0.293841 (0.078586) | 0.031947 / 0.128546 (-0.096599) | 0.011669 / 0.075646 (-0.063977) | 0.323086 / 0.419271 (-0.096186) | 0.043578 / 0.043533 (0.000045) | 0.325549 / 0.255139 (0.070410) | 0.363827 / 0.283200 (0.080627) | 0.087819 / 0.141683 (-0.053864) | 1.479429 / 1.452155 (0.027274) | 1.549797 / 1.492716 (0.057080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178502 / 0.018006 (0.160496) | 0.415954 / 0.000490 (0.415465) | 0.008767 / 0.000200 (0.008567) | 0.000429 / 0.000054 (0.000375) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023639 / 0.037411 (-0.013772) | 0.096266 / 0.014526 (0.081740) | 0.106406 / 0.176557 (-0.070151) | 0.168819 / 0.737135 (-0.568317) | 0.109158 / 0.296338 (-0.187181) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420729 / 0.215209 (0.205520) | 4.219469 / 2.077655 (2.141814) | 1.885673 / 1.504120 (0.381553) | 1.681868 / 1.541195 (0.140674) | 1.709240 / 1.468490 (0.240749) | 0.694763 / 4.584777 (-3.890014) | 3.395377 / 3.745712 (-0.350335) | 1.846811 / 5.269862 (-3.423051) | 1.158381 / 4.565676 (-3.407296) | 0.082717 / 0.424275 (-0.341558) | 0.012302 / 0.007607 (0.004695) | 0.518148 / 0.226044 (0.292103) | 5.189590 / 2.268929 (2.920661) | 2.294127 / 55.444624 (-53.150498) | 1.960080 / 6.876477 (-4.916397) | 2.045359 / 2.142072 (-0.096713) | 0.803739 / 4.805227 (-4.001488) | 0.152322 / 6.500664 (-6.348342) | 0.067051 / 0.075469 (-0.008418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206582 / 1.841788 (-0.635206) | 13.590515 / 8.074308 (5.516207) | 14.083739 / 10.191392 (3.892347) | 0.128738 / 0.680424 (-0.551686) | 0.016577 / 0.534201 (-0.517624) | 0.375499 / 0.579283 (-0.203784) | 0.383256 / 0.434364 (-0.051108) | 0.439441 / 0.540337 
(-0.100896) | 0.518102 / 1.386936 (-0.868834) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006708 / 0.011353 (-0.004645) | 0.004591 / 0.011008 (-0.006417) | 0.076512 / 0.038508 (0.038004) | 0.027977 / 0.023109 (0.004868) | 0.341915 / 0.275898 (0.066017) | 0.374381 / 0.323480 (0.050901) | 0.004985 / 0.007986 (-0.003001) | 0.003374 / 0.004328 (-0.000954) | 0.075334 / 0.004250 (0.071083) | 0.037522 / 0.037052 (0.000470) | 0.341702 / 0.258489 (0.083213) | 0.384342 / 0.293841 (0.090501) | 0.032231 / 0.128546 (-0.096315) | 0.011494 / 0.075646 (-0.064153) | 0.084897 / 0.419271 (-0.334375) | 0.041914 / 0.043533 (-0.001619) | 0.342030 / 0.255139 (0.086891) | 0.371024 / 0.283200 (0.087825) | 0.089936 / 0.141683 (-0.051746) | 1.497242 / 1.452155 (0.045087) | 1.585203 / 1.492716 (0.092486) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227681 / 0.018006 (0.209674) | 0.398995 / 0.000490 (0.398505) | 0.003232 / 0.000200 (0.003032) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024705 / 0.037411 (-0.012706) | 0.099906 / 0.014526 (0.085380) | 0.106806 / 0.176557 (-0.069750) | 0.157521 / 0.737135 (-0.579614) | 0.110803 / 0.296338 (-0.185535) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457442 / 0.215209 (0.242233) | 4.580101 / 2.077655 (2.502446) | 2.094687 / 1.504120 (0.590567) | 1.880722 / 1.541195 (0.339528) | 1.938746 
/ 1.468490 (0.470256) | 0.700933 / 4.584777 (-3.883844) | 3.416278 / 3.745712 (-0.329434) | 2.852183 / 5.269862 (-2.417679) | 1.602659 / 4.565676 (-2.963017) | 0.083949 / 0.424275 (-0.340326) | 0.012255 / 0.007607 (0.004648) | 0.551631 / 0.226044 (0.325586) | 5.539225 / 2.268929 (3.270296) | 2.707298 / 55.444624 (-52.737326) | 2.354720 / 6.876477 (-4.521757) | 2.320790 / 2.142072 (0.178717) | 0.807152 / 4.805227 (-3.998075) | 0.152048 / 6.500664 (-6.348616) | 0.067723 / 0.075469 (-0.007746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295690 / 1.841788 (-0.546097) | 13.738082 / 8.074308 (5.663774) | 14.129549 / 10.191392 (3.938157) | 0.161568 / 0.680424 (-0.518855) | 0.016678 / 0.534201 (-0.517522) | 0.386609 / 0.579283 (-0.192674) | 0.383538 / 0.434364 (-0.050826) | 0.477872 / 0.540337 (-0.062465) | 0.564547 / 1.386936 (-0.822389) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ab4c98618bce7c1f60ce96d4a853a940ae4b250 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007247 / 0.011353 (-0.004106) | 0.005044 / 0.011008 (-0.005964) | 0.095135 / 0.038508 (0.056627) | 0.033622 / 0.023109 (0.010513) | 0.309969 / 0.275898 (0.034071) | 0.340354 / 0.323480 (0.016875) | 0.005635 / 0.007986 (-0.002351) | 0.003938 / 0.004328 (-0.000391) | 0.072089 / 0.004250 (0.067838) | 0.045592 / 0.037052 (0.008539) | 0.316620 / 0.258489 (0.058131) | 0.358174 / 0.293841 (0.064333) | 0.036446 / 0.128546 (-0.092100) | 0.011961 / 0.075646 (-0.063685) | 0.332299 / 0.419271 (-0.086973) | 0.049955 / 0.043533 (0.006422) | 0.307638 / 0.255139 (0.052499) | 0.331719 / 0.283200 (0.048519) | 0.095115 / 0.141683 (-0.046568) | 1.457960 / 1.452155 (0.005806) | 1.502812 / 1.492716 (0.010096) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223747 / 0.018006 (0.205740) | 0.444837 / 0.000490 (0.444347) | 0.002583 / 
0.000200 (0.002383) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026461 / 0.037411 (-0.010951) | 0.103946 / 0.014526 (0.089420) | 0.114355 / 0.176557 (-0.062201) | 0.170076 / 0.737135 (-0.567059) | 0.121087 / 0.296338 (-0.175252) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403252 / 0.215209 (0.188043) | 4.016911 / 2.077655 (1.939257) | 1.787168 / 1.504120 (0.283048) | 1.605206 / 1.541195 (0.064012) | 1.657012 / 1.468490 (0.188522) | 0.701425 / 4.584777 (-3.883352) | 3.818308 / 3.745712 (0.072596) | 3.493757 / 5.269862 (-1.776105) | 1.860534 / 4.565676 (-2.705142) | 0.084994 / 0.424275 (-0.339281) | 0.011904 / 0.007607 (0.004297) | 0.534199 / 0.226044 (0.308155) | 4.992703 / 2.268929 (2.723774) | 2.286231 / 55.444624 (-53.158393) | 1.918163 / 6.876477 (-4.958314) | 2.029811 / 2.142072 (-0.112262) | 0.837532 / 4.805227 (-3.967695) | 0.168545 / 6.500664 (-6.332119) | 0.062866 / 0.075469 (-0.012604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172862 / 1.841788 (-0.668926) | 14.966793 / 8.074308 (6.892485) | 14.202079 / 10.191392 (4.010687) | 0.144688 / 0.680424 (-0.535736) | 0.017499 / 0.534201 (-0.516702) | 0.443081 / 0.579283 (-0.136202) | 0.427496 / 0.434364 (-0.006868) | 0.525182 / 0.540337 (-0.015155) | 0.611849 / 1.386936 (-0.775087) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007264 / 0.011353 (-0.004089) | 0.005106 / 0.011008 (-0.005902) | 0.074101 / 0.038508 (0.035593) | 0.033388 / 0.023109 (0.010279) | 0.337108 / 0.275898 (0.061210) | 0.369820 / 0.323480 (0.046340) | 0.005701 / 0.007986 (-0.002284) | 0.003976 / 0.004328 (-0.000353) | 0.073517 / 0.004250 (0.069267) | 0.048741 / 0.037052 (0.011688) | 0.339118 / 0.258489 (0.080629) | 0.398687 / 0.293841 (0.104846) | 0.036661 / 0.128546 (-0.091886) | 0.012082 / 0.075646 (-0.063564) | 0.086743 / 0.419271 (-0.332529) | 0.050150 / 0.043533 (0.006617) | 0.335572 / 0.255139 (0.080433) | 0.354306 / 0.283200 (0.071107) | 0.102074 / 0.141683 (-0.039609) | 1.442911 / 1.452155 (-0.009244) | 1.531564 / 1.492716 (0.038848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183163 / 0.018006 (0.165157) | 0.439273 / 0.000490 (0.438783) | 0.002765 / 0.000200 (0.002565) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028185 / 0.037411 (-0.009227) | 0.107337 / 0.014526 (0.092811) | 0.119925 / 0.176557 (-0.056631) | 0.172120 / 0.737135 (-0.565015) | 0.124332 / 0.296338 (-0.172007) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428750 / 0.215209 (0.213541) | 4.268933 / 2.077655 (2.191279) | 2.050135 / 1.504120 (0.546015) | 1.837567 / 1.541195 (0.296372) | 1.907040 / 1.468490 (0.438549) | 0.694162 / 4.584777 (-3.890615) | 3.831542 / 3.745712 (0.085830) | 3.476580 / 5.269862 (-1.793281) | 1.855097 / 4.565676 (-2.710580) | 0.085816 / 0.424275 (-0.338459) | 0.012195 / 0.007607 (0.004588) | 0.544920 / 0.226044 (0.318876) | 5.332977 / 2.268929 (3.064049) | 2.592097 / 55.444624 (-52.852527) | 2.295411 / 6.876477 (-4.581065) | 2.330803 / 2.142072 (0.188730) | 0.833268 / 4.805227 (-3.971959) | 0.177698 / 6.500664 (-6.322966) | 0.063780 / 0.075469 (-0.011689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273361 / 1.841788 (-0.568427) | 14.981380 / 8.074308 (6.907072) | 14.395166 / 10.191392 (4.203774) | 0.186590 / 0.680424 (-0.493834) | 0.017676 / 0.534201 (-0.516525) | 0.432100 / 0.579283 (-0.147183) | 0.422490 / 0.434364 (-0.011874) | 0.531421 / 0.540337 (-0.008916) | 0.628548 / 1.386936 (-0.758388) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b16e08dd599f4646a77a5ca88b6445467e1e7e9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009005 / 0.011353 (-0.002348) | 0.005803 / 0.011008 (-0.005205) | 0.103491 / 0.038508 (0.064983) | 0.048099 / 0.023109 (0.024990) | 0.304026 / 0.275898 (0.028128) | 0.340840 / 0.323480 (0.017360) | 0.006782 / 0.007986 (-0.001204) | 0.004625 / 0.004328 (0.000296) | 0.076695 / 0.004250 (0.072445) | 0.057541 / 0.037052 (0.020489) | 0.304015 / 0.258489 (0.045526) | 0.347822 / 0.293841 (0.053981) | 0.037904 / 0.128546 (-0.090642) | 0.012686 / 0.075646 (-0.062960) | 0.368093 / 0.419271 (-0.051179) | 0.051795 / 0.043533 (0.008262) | 0.302553 / 0.255139 (0.047415) | 0.328581 / 0.283200 (0.045381) | 0.108947 / 0.141683 (-0.032736) | 1.449770 / 1.452155 (-0.002385) | 1.541944 / 1.492716 (0.049227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207529 / 0.018006 (0.189523) | 0.455313 / 0.000490 (0.454823) | 0.008276 / 0.000200 (0.008076) | 0.000322 / 0.000054 (0.000268) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030564 / 0.037411 (-0.006848) | 0.122790 / 0.014526 (0.108264) | 0.126981 / 0.176557 (-0.049576) | 0.187203 / 0.737135 (-0.549932) | 0.129931 / 0.296338 (-0.166408) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402680 / 0.215209 
(0.187471) | 4.017505 / 2.077655 (1.939850) | 1.801480 / 1.504120 (0.297360) | 1.647984 / 1.541195 (0.106790) | 1.702596 / 1.468490 (0.234106) | 0.717469 / 4.584777 (-3.867308) | 3.793813 / 3.745712 (0.048101) | 2.288014 / 5.269862 (-2.981848) | 1.497545 / 4.565676 (-3.068132) | 0.091241 / 0.424275 (-0.333034) | 0.013115 / 0.007607 (0.005508) | 0.498567 / 0.226044 (0.272522) | 4.990203 / 2.268929 (2.721275) | 2.334983 / 55.444624 (-53.109642) | 2.047888 / 6.876477 (-4.828589) | 2.167825 / 2.142072 (0.025753) | 0.863769 / 4.805227 (-3.941459) | 0.172699 / 6.500664 (-6.327965) | 0.069285 / 0.075469 (-0.006184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397331 / 1.841788 (-0.444457) | 16.678240 / 8.074308 (8.603932) | 16.665143 / 10.191392 (6.473751) | 0.151011 / 0.680424 (-0.529412) | 0.018303 / 0.534201 (-0.515898) | 0.445389 / 0.579283 (-0.133894) | 0.444644 / 0.434364 (0.010280) | 0.524647 / 0.540337 (-0.015690) | 0.629747 / 1.386936 (-0.757189) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008853 / 0.011353 (-0.002499) | 0.006196 / 0.011008 (-0.004813) | 0.078595 / 0.038508 (0.040087) | 0.048348 / 0.023109 (0.025239) | 0.347038 / 0.275898 (0.071140) | 0.385807 / 0.323480 (0.062327) | 0.007047 / 0.007986 (-0.000938) | 0.004772 / 0.004328 (0.000443) | 0.076116 / 0.004250 (0.071866) | 0.058805 / 0.037052 (0.021752) | 0.345731 / 0.258489 (0.087242) | 0.401589 / 0.293841 (0.107748) | 0.039349 / 0.128546 (-0.089197) | 0.012949 / 0.075646 (-0.062697) | 0.089761 / 0.419271 (-0.329511) | 0.060001 / 0.043533 (0.016468) | 0.351587 / 0.255139 (0.096448) | 0.377708 / 0.283200 (0.094509) | 0.117391 / 0.141683 (-0.024292) | 1.471622 / 1.452155 (0.019467) | 1.568759 / 1.492716 (0.076042) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191390 / 0.018006 (0.173384) | 0.469033 / 0.000490 (0.468544) | 0.003615 / 0.000200 (0.003415) | 0.000113 / 0.000054 
(0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032706 / 0.037411 (-0.004706) | 0.127095 / 0.014526 (0.112569) | 0.128755 / 0.176557 (-0.047801) | 0.182590 / 0.737135 (-0.554545) | 0.136939 / 0.296338 (-0.159400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427392 / 0.215209 (0.212183) | 4.246708 / 2.077655 (2.169053) | 2.115557 / 1.504120 (0.611437) | 2.021221 / 1.541195 (0.480026) | 2.177559 / 1.468490 (0.709069) | 0.713930 / 4.584777 (-3.870847) | 4.192467 / 3.745712 (0.446755) | 3.645437 / 5.269862 (-1.624424) | 1.964986 / 4.565676 (-2.600690) | 0.089436 / 0.424275 (-0.334839) | 0.012917 / 0.007607 (0.005310) | 0.530468 / 0.226044 (0.304423) | 5.310759 / 2.268929 (3.041831) | 2.613566 / 55.444624 (-52.831058) | 2.350443 / 6.876477 (-4.526034) | 2.385278 / 2.142072 (0.243205) | 0.862838 / 4.805227 (-3.942389) | 0.172246 / 6.500664 (-6.328418) | 0.069570 / 0.075469 (-0.005899) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310008 / 1.841788 (-0.531780) | 16.557079 / 8.074308 (8.482771) | 15.818145 / 10.191392 (5.626752) | 0.180337 / 0.680424 (-0.500087) | 0.018117 / 0.534201 (-0.516083) | 0.433189 / 0.579283 (-0.146095) | 0.429276 / 0.434364 (-0.005088) | 0.539757 / 0.540337 (-0.000580) | 0.640905 / 1.386936 (-0.746031) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b16e08dd599f4646a77a5ca88b6445467e1e7e9 \"CML watermark\")\n" ]
"2023-03-29T15:06:07"
"2023-03-29T18:30:34"
"2023-03-29T18:15:54"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5684/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5684", "html_url": "https://github.com/huggingface/datasets/pull/5684", "diff_url": "https://github.com/huggingface/datasets/pull/5684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5684.patch", "merged_at": "2023-03-29T18:15:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/5683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5683/comments
https://api.github.com/repos/huggingface/datasets/issues/5683/events
https://github.com/huggingface/datasets/pull/5683
1,646,001,197
PR_kwDODunzps5NLUq1
5,683
Fix verification_mode when ignore_verifications is passed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006935 / 0.011353 (-0.004418) | 0.004711 / 0.011008 (-0.006297) | 0.098461 / 0.038508 (0.059953) | 0.028889 / 0.023109 (0.005780) | 0.332167 / 0.275898 (0.056269) | 0.363309 / 0.323480 (0.039829) | 0.005179 / 0.007986 (-0.002807) | 0.004783 / 0.004328 (0.000455) | 0.074293 / 0.004250 (0.070043) | 0.038778 / 0.037052 (0.001726) | 0.318871 / 0.258489 (0.060382) | 0.362975 / 0.293841 (0.069134) | 0.032897 / 0.128546 (-0.095649) | 0.011685 / 0.075646 (-0.063961) | 0.322824 / 0.419271 (-0.096447) | 0.043842 / 0.043533 (0.000309) | 0.334789 / 0.255139 (0.079650) | 0.352922 / 0.283200 (0.069723) | 0.089692 / 0.141683 (-0.051991) | 1.490110 / 1.452155 (0.037955) | 1.601530 / 1.492716 (0.108813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201882 / 0.018006 (0.183875) | 0.410875 / 0.000490 (0.410385) | 0.002472 / 0.000200 (0.002272) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023636 / 0.037411 (-0.013775) | 0.102168 / 0.014526 (0.087642) | 0.107247 / 0.176557 (-0.069310) | 0.171858 / 0.737135 (-0.565278) | 0.110619 / 0.296338 (-0.185720) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433740 / 0.215209 (0.218531) | 4.332121 / 2.077655 (2.254466) | 2.075398 
/ 1.504120 (0.571278) | 1.941074 / 1.541195 (0.399879) | 2.033331 / 1.468490 (0.564841) | 0.697134 / 4.584777 (-3.887643) | 3.463855 / 3.745712 (-0.281857) | 3.080446 / 5.269862 (-2.189416) | 1.575020 / 4.565676 (-2.990656) | 0.083054 / 0.424275 (-0.341221) | 0.012454 / 0.007607 (0.004847) | 0.537996 / 0.226044 (0.311951) | 5.366765 / 2.268929 (3.097836) | 2.464398 / 55.444624 (-52.980227) | 2.143912 / 6.876477 (-4.732564) | 2.245706 / 2.142072 (0.103634) | 0.801397 / 4.805227 (-4.003831) | 0.150954 / 6.500664 (-6.349710) | 0.066758 / 0.075469 (-0.008711) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216412 / 1.841788 (-0.625376) | 13.679322 / 8.074308 (5.605014) | 14.055286 / 10.191392 (3.863894) | 0.130264 / 0.680424 (-0.550160) | 0.016566 / 0.534201 (-0.517635) | 0.379126 / 0.579283 (-0.200157) | 0.390815 / 0.434364 (-0.043549) | 0.437586 / 0.540337 (-0.102751) | 0.526822 / 1.386936 (-0.860114) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006898 / 0.011353 (-0.004455) | 0.004705 / 0.011008 (-0.006304) | 0.078592 / 0.038508 (0.040084) | 0.028635 / 0.023109 (0.005525) | 0.340143 / 0.275898 (0.064245) | 0.377526 / 0.323480 (0.054047) | 0.005645 / 0.007986 (-0.002340) | 0.003533 / 0.004328 (-0.000796) | 0.078441 / 0.004250 (0.074191) | 0.039408 / 0.037052 (0.002356) | 0.342303 / 0.258489 (0.083814) | 0.386837 / 0.293841 (0.092996) | 0.032427 / 0.128546 (-0.096119) | 0.011763 / 0.075646 (-0.063883) | 0.087984 / 0.419271 (-0.331287) | 0.042126 / 0.043533 (-0.001406) | 0.339951 / 0.255139 (0.084812) | 0.366165 / 0.283200 (0.082966) | 0.091414 / 0.141683 (-0.050269) | 1.502034 / 1.452155 (0.049880) | 1.597901 / 1.492716 (0.105184) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232122 / 0.018006 (0.214115) | 0.410205 / 0.000490 (0.409715) | 0.000418 / 0.000200 (0.000218) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026013 / 0.037411 (-0.011399) | 0.105520 / 0.014526 (0.090995) | 0.108649 / 0.176557 (-0.067908) | 0.159324 / 0.737135 (-0.577811) | 0.114033 / 0.296338 (-0.182306) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455634 / 0.215209 (0.240425) | 4.508544 / 2.077655 (2.430889) | 2.087065 / 1.504120 (0.582945) | 1.872622 / 1.541195 (0.331427) | 1.935617 / 1.468490 (0.467127) | 0.696909 / 4.584777 (-3.887868) | 3.449365 / 3.745712 (-0.296348) | 3.008399 / 5.269862 (-2.261462) | 1.459245 / 4.565676 (-3.106431) | 0.083637 / 0.424275 (-0.340638) | 0.012358 / 0.007607 (0.004750) | 0.547232 / 0.226044 (0.321187) | 5.522395 / 2.268929 (3.253466) | 2.691019 / 55.444624 (-52.753605) | 2.408083 / 6.876477 (-4.468394) | 2.369239 / 2.142072 (0.227166) | 0.807148 / 4.805227 (-3.998080) | 0.152030 / 6.500664 (-6.348634) | 0.067883 / 0.075469 (-0.007586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336956 / 1.841788 (-0.504832) | 14.403730 / 8.074308 (6.329422) | 14.854084 / 10.191392 (4.662692) | 0.146530 / 0.680424 (-0.533894) | 0.016611 / 0.534201 (-0.517590) | 0.398557 / 0.579283 (-0.180726) | 0.393194 / 0.434364 (-0.041170) | 0.486824 / 0.540337 (-0.053513) | 0.572844 / 1.386936 (-0.814092) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#411f9cc281e50954ea0c903e7a0a6618b3d31b9e \"CML watermark\")\n" ]
"2023-03-29T15:00:50"
"2023-03-29T17:36:06"
"2023-03-29T17:28:57"
MEMBER
null
This PR fixes the values assigned to `verification_mode` when passing `ignore_verifications` to `load_dataset`. Related to: - #5303 Fix #5682.
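A minimal sketch of the intended mapping, assuming the `VerificationMode` enum introduced in #5303; the member chosen for `ignore_verifications=False` is an assumption here, and the real fix lives inside `load_dataset`/`load_dataset_builder`:

```python
from datasets import VerificationMode

def resolve_verification_mode(ignore_verifications: bool) -> VerificationMode:
    # Map the deprecated boolean to a valid enum member instead of the
    # invalid string "none" that triggered the ValueError in #5682.
    # ALL_CHECKS as the non-ignoring default is an assumption for this sketch.
    return VerificationMode.NO_CHECKS if ignore_verifications else VerificationMode.ALL_CHECKS
```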
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5683/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5683", "html_url": "https://github.com/huggingface/datasets/pull/5683", "diff_url": "https://github.com/huggingface/datasets/pull/5683.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5683.patch", "merged_at": "2023-03-29T17:28:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/5682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5682/comments
https://api.github.com/repos/huggingface/datasets/issues/5682/events
https://github.com/huggingface/datasets/issues/5682
1,646,000,571
I_kwDODunzps5iG_m7
5,682
ValueError when passing ignore_verifications
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-03-29T15:00:30"
"2023-03-29T17:28:58"
"2023-03-29T17:28:58"
MEMBER
null
When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError: ``` ValueError: 'none' is not a valid VerificationMode ```
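A minimal reproduction sketch; the dataset name is only a placeholder:

```python
from datasets import load_dataset

# Any dataset triggers it; "squad" is just an example.
ds = load_dataset("squad", ignore_verifications=True)
# Before the fix in #5683 this raised:
#   ValueError: 'none' is not a valid VerificationMode
```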
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5682/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5681/comments
https://api.github.com/repos/huggingface/datasets/issues/5681/events
https://github.com/huggingface/datasets/issues/5681
1,645,630,784
I_kwDODunzps5iFlVA
5,681
Add information about patterns search order to the doc about structuring repo
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false } ]
null
[ "Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)", "Closed in #5693 " ]
"2023-03-29T11:44:49"
"2023-04-03T18:31:11"
"2023-04-03T18:31:11"
CONTRIBUTOR
null
Following [this](https://github.com/huggingface/datasets/issues/5650) issue, I think we should add a note about the order of patterns that is used to find splits; see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). We should also reference this page in the pages about packaged loaders. I have a déjà vu that it had already been discussed at some point, but I don't remember where.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5681/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5680/comments
https://api.github.com/repos/huggingface/datasets/issues/5680/events
https://github.com/huggingface/datasets/pull/5680
1,645,430,103
PR_kwDODunzps5NJYNz
5,680
Fix a description error for interleave_datasets.
{ "login": "QizhiPei", "id": 55624066, "node_id": "MDQ6VXNlcjU1NjI0MDY2", "avatar_url": "https://avatars.githubusercontent.com/u/55624066?v=4", "gravatar_id": "", "url": "https://api.github.com/users/QizhiPei", "html_url": "https://github.com/QizhiPei", "followers_url": "https://api.github.com/users/QizhiPei/followers", "following_url": "https://api.github.com/users/QizhiPei/following{/other_user}", "gists_url": "https://api.github.com/users/QizhiPei/gists{/gist_id}", "starred_url": "https://api.github.com/users/QizhiPei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QizhiPei/subscriptions", "organizations_url": "https://api.github.com/users/QizhiPei/orgs", "repos_url": "https://api.github.com/users/QizhiPei/repos", "events_url": "https://api.github.com/users/QizhiPei/events{/privacy}", "received_events_url": "https://api.github.com/users/QizhiPei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006772 / 0.011353 (-0.004581) | 0.004674 / 0.011008 (-0.006335) | 0.098702 / 0.038508 (0.060194) | 0.028257 / 0.023109 (0.005148) | 0.368008 / 0.275898 (0.092110) | 0.402825 / 0.323480 (0.079345) | 0.005158 / 0.007986 (-0.002828) | 0.003470 / 0.004328 (-0.000858) | 0.075541 / 0.004250 (0.071291) | 0.039755 / 0.037052 (0.002702) | 0.373431 / 0.258489 (0.114942) | 0.410159 / 0.293841 (0.116318) | 0.031355 / 0.128546 (-0.097192) | 0.011632 / 0.075646 (-0.064014) | 0.325475 / 0.419271 (-0.093797) | 0.042574 / 0.043533 (-0.000958) | 0.373629 / 0.255139 (0.118490) | 0.393921 / 0.283200 (0.110721) | 0.084669 / 0.141683 (-0.057013) | 1.459947 / 1.452155 (0.007792) | 1.529593 / 1.492716 (0.036877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189994 / 0.018006 (0.171988) | 0.409091 / 0.000490 (0.408602) | 0.003693 / 0.000200 (0.003493) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024649 / 0.037411 (-0.012762) | 0.097702 / 0.014526 (0.083177) | 0.103650 / 0.176557 (-0.072906) | 0.167141 / 0.737135 (-0.569994) | 0.108460 / 0.296338 (-0.187879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old 
(diff) | 0.429544 / 0.215209 (0.214335) | 4.277106 / 2.077655 (2.199451) | 2.018745 / 1.504120 (0.514625) | 1.814782 / 1.541195 (0.273587) | 1.897030 / 1.468490 (0.428540) | 0.700332 / 4.584777 (-3.884445) | 3.421761 / 3.745712 (-0.323951) | 3.008281 / 5.269862 (-2.261581) | 1.554230 / 4.565676 (-3.011446) | 0.082922 / 0.424275 (-0.341353) | 0.012312 / 0.007607 (0.004705) | 0.527757 / 0.226044 (0.301713) | 5.287450 / 2.268929 (3.018522) | 2.329083 / 55.444624 (-53.115542) | 2.016651 / 6.876477 (-4.859826) | 2.214510 / 2.142072 (0.072437) | 0.807676 / 4.805227 (-3.997551) | 0.151752 / 6.500664 (-6.348912) | 0.066819 / 0.075469 (-0.008651) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239522 / 1.841788 (-0.602266) | 13.923672 / 8.074308 (5.849364) | 14.317394 / 10.191392 (4.126002) | 0.159379 / 0.680424 (-0.521045) | 0.016537 / 0.534201 (-0.517664) | 0.376808 / 0.579283 (-0.202475) | 0.376351 / 0.434364 (-0.058012) | 0.437124 / 0.540337 (-0.103213) | 0.520589 / 1.386936 (-0.866347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006892 / 0.011353 (-0.004461) | 0.004671 / 0.011008 (-0.006337) | 0.075841 / 0.038508 (0.037333) | 0.028713 / 0.023109 (0.005604) | 0.345105 / 0.275898 (0.069207) | 0.380694 / 0.323480 (0.057214) | 0.005155 / 0.007986 (-0.002830) | 0.003379 / 0.004328 (-0.000949) | 0.075134 / 0.004250 (0.070883) | 0.039990 / 0.037052 (0.002938) | 0.345540 / 0.258489 (0.087051) | 0.389913 / 0.293841 (0.096072) | 0.032089 / 0.128546 (-0.096458) | 0.011583 / 0.075646 (-0.064063) | 0.085169 / 0.419271 (-0.334102) | 0.041847 / 0.043533 (-0.001686) | 0.341504 / 0.255139 (0.086365) | 0.367582 / 0.283200 (0.084382) | 0.092684 / 0.141683 (-0.048999) | 1.498647 / 1.452155 (0.046492) | 1.549056 / 1.492716 (0.056339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228643 / 0.018006 (0.210637) | 0.410680 / 0.000490 (0.410191) | 0.000398 / 0.000200 
(0.000198) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025354 / 0.037411 (-0.012057) | 0.101567 / 0.014526 (0.087041) | 0.108340 / 0.176557 (-0.068217) | 0.157804 / 0.737135 (-0.579332) | 0.113985 / 0.296338 (-0.182354) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436427 / 0.215209 (0.221218) | 4.359331 / 2.077655 (2.281676) | 2.047877 / 1.504120 (0.543757) | 1.844242 / 1.541195 (0.303047) | 1.924553 / 1.468490 (0.456063) | 0.695986 / 4.584777 (-3.888791) | 3.435571 / 3.745712 (-0.310141) | 1.905189 / 5.269862 (-3.364673) | 1.198542 / 4.565676 (-3.367134) | 0.083386 / 0.424275 (-0.340889) | 0.012442 / 0.007607 (0.004835) | 0.542562 / 0.226044 (0.316517) | 5.416554 / 2.268929 (3.147625) | 2.499496 / 55.444624 (-52.945128) | 2.160658 / 6.876477 (-4.715819) | 2.210535 / 2.142072 (0.068462) | 0.803324 / 4.805227 (-4.001903) | 0.151735 / 6.500664 (-6.348929) | 0.068392 / 0.075469 (-0.007078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319915 / 1.841788 (-0.521873) | 14.176755 / 8.074308 (6.102446) | 14.376366 / 10.191392 (4.184974) | 0.141219 / 0.680424 (-0.539204) | 0.017181 / 0.534201 (-0.517020) | 0.383589 / 0.579283 (-0.195694) | 0.389352 / 0.434364 (-0.045012) | 0.474465 / 0.540337 (-0.065873) | 0.563047 / 1.386936 (-0.823889) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c33e8ce68b5000988bf6b2e4bca27ffaa469acea \"CML watermark\")\n" ]
"2023-03-29T09:50:23"
"2023-03-30T13:14:19"
"2023-03-30T13:07:18"
CONTRIBUTOR
null
There is a mistake in the docstring example for `interleave_datasets` with the "all_exhausted" stopping_strategy. ``` python d1 = Dataset.from_dict({"a": [0, 1, 2]}) d2 = Dataset.from_dict({"a": [10, 11, 12, 13]}) d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]}) dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") ``` According to the interleaving order, the correct output of `dataset["a"]` is `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]`, not `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]`
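For readers checking the corrected sequence, a small sketch (not the library implementation) of the "all_exhausted" round-robin: datasets are visited in turn, an exhausted dataset restarts from its first row, and iteration stops once every dataset has been fully seen at least once.

```python
def interleave_all_exhausted(columns):
    """Round-robin sketch of stopping_strategy="all_exhausted" over plain lists."""
    next_idx = [0] * len(columns)
    fully_seen = [False] * len(columns)
    out, i = [], 0
    while not all(fully_seen):
        col = columns[i]
        out.append(col[next_idx[i]])
        next_idx[i] += 1
        if next_idx[i] == len(col):  # this dataset is now fully seen; restart it
            fully_seen[i] = True
            next_idx[i] = 0
        i = (i + 1) % len(columns)
    return out

print(interleave_all_exhausted([[0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24]]))
# [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]
```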
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5680/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5680", "html_url": "https://github.com/huggingface/datasets/pull/5680", "diff_url": "https://github.com/huggingface/datasets/pull/5680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5680.patch", "merged_at": "2023-03-30T13:07:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/5679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5679/comments
https://api.github.com/repos/huggingface/datasets/issues/5679/events
https://github.com/huggingface/datasets/issues/5679
1,645,184,622
I_kwDODunzps5iD4Zu
5,679
Allow load_dataset to take a working dir for intermediate data
{ "login": "lu-wang-dl", "id": 38018689, "node_id": "MDQ6VXNlcjM4MDE4Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lu-wang-dl", "html_url": "https://github.com/lu-wang-dl", "followers_url": "https://api.github.com/users/lu-wang-dl/followers", "following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}", "gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}", "starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions", "organizations_url": "https://api.github.com/users/lu-wang-dl/orgs", "repos_url": "https://api.github.com/users/lu-wang-dl/repos", "events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}", "received_events_url": "https://api.github.com/users/lu-wang-dl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud storage with:\r\n```python\r\nbuilder = load_dataset_builder(..., cache_dir=\"/temp/dir\")\r\nbuilder.download_and_prepare(\"/cloud_dir\")\r\n```\r\n\r\nbut then \r\n```python\r\nds = builder.as_dataset()\r\n```\r\nwould fail if \"/cloud_dir\" is not a local directory.", "In my use case, I am trying to mount the S3 bucket as local system with S3FS-FUSE / [goofys](https://github.com/kahing/goofys). I want to use S3 to save the download data and save checkpoint for training for persistent. Setting the s3 location as cache directory is not fast enough. That is why I want to set a work directory for temp data for memory map and only save the final result to s3 cache. ", "You can try setting `HF_DATASETS_DOWNLOADED_DATASETS_PATH` and `HF_DATASETS_EXTRACTED_DATASETS_PATH` to S3, and `HF_DATASETS_CACHE` to your local disk.\r\n\r\nThis way all your downloaded and extracted data are on your mounted S3, but the datasets Arrow files are on your local disk", "If we hope to also persist the Arrow files on the mounted S3 but work with the efficiency of local disk, is there any recommended way to do this, other than copying the Arrow files from local disk to S3?" ]
"2023-03-29T07:21:09"
"2023-04-12T22:30:25"
null
NONE
null
### Feature request As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like ``` load_dataset(…, working_dir="/temp/dir", cache_dir="/cloud_dir") ``` ### Motivation This will help the use case of using cloud storage as the datasets cache, and it will help boost performance. ### Your contribution I can provide a PR for this if the proposal seems reasonable.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5679/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5678/comments
https://api.github.com/repos/huggingface/datasets/issues/5678/events
https://github.com/huggingface/datasets/issues/5678
1,645,018,359
I_kwDODunzps5iDPz3
5,678
Add support to create a Dataset from spark dataframe
{ "login": "lu-wang-dl", "id": 38018689, "node_id": "MDQ6VXNlcjM4MDE4Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lu-wang-dl", "html_url": "https://github.com/lu-wang-dl", "followers_url": "https://api.github.com/users/lu-wang-dl/followers", "following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}", "gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}", "starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions", "organizations_url": "https://api.github.com/users/lu-wang-dl/orgs", "repos_url": "https://api.github.com/users/lu-wang-dl/repos", "events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}", "received_events_url": "https://api.github.com/users/lu-wang-dl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "if i read spark Dataframe , got an error on multi-node Spark cluster.\r\nDid the Api (Dataset.from_spark) support Spark cluster, read dataframe and save_to_disk?\r\n\r\nError: \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.\r\n23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n\r\n", "How to perform predictions on Dataset object in Spark with multi-node cluster parallelism?", "Addressed in #5701" ]
"2023-03-29T04:36:28"
"2023-07-21T14:15:38"
"2023-07-21T14:15:38"
NONE
null
### Feature request Add a new API `Dataset.from_spark` to create a Dataset from a Spark DataFrame. ### Motivation Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process the data in parallel. By providing a seamless integration between these two frameworks, we make it easier for data scientists and developers to work with both Spark and Hugging Face in the same workflow. ### Your contribution We can discuss the ideas and I can help prepare a PR for this feature.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5678/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5678/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5677/comments
https://api.github.com/repos/huggingface/datasets/issues/5677/events
https://github.com/huggingface/datasets/issues/5677
1,644,828,606
I_kwDODunzps5iChe-
5,677
Dataset.map() crashes when any column contains more than 1000 empty dictionaries
{ "login": "mtoles", "id": 7139344, "node_id": "MDQ6VXNlcjcxMzkzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mtoles", "html_url": "https://github.com/mtoles", "followers_url": "https://api.github.com/users/mtoles/followers", "following_url": "https://api.github.com/users/mtoles/following{/other_user}", "gists_url": "https://api.github.com/users/mtoles/gists{/gist_id}", "starred_url": "https://api.github.com/users/mtoles/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtoles/subscriptions", "organizations_url": "https://api.github.com/users/mtoles/orgs", "repos_url": "https://api.github.com/users/mtoles/repos", "events_url": "https://api.github.com/users/mtoles/events{/privacy}", "received_events_url": "https://api.github.com/users/mtoles/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-03-29T00:01:31"
"2023-07-07T14:01:14"
"2023-07-07T14:01:14"
NONE
null
### Describe the bug `Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty. ### Steps to reproduce the bug Example: ``` import datasets def add_one(example): example["col2"] += 1 return example n = 1001 # crashes # n = 999 # works ds = datasets.Dataset.from_dict({"col1": [{}] * n, "col2": [1] * n}) ds = ds.map(add_one, writer_batch_size=1000) ``` ### Expected behavior Above code should not crash ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5677/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5675/comments
https://api.github.com/repos/huggingface/datasets/issues/5675/events
https://github.com/huggingface/datasets/issues/5675
1,641,763,478
I_kwDODunzps5h21KW
5,675
Filter datasets by language code
{ "login": "named-entity", "id": 5658496, "node_id": "MDQ6VXNlcjU2NTg0OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/5658496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/named-entity", "html_url": "https://github.com/named-entity", "followers_url": "https://api.github.com/users/named-entity/followers", "following_url": "https://api.github.com/users/named-entity/following{/other_user}", "gists_url": "https://api.github.com/users/named-entity/gists{/gist_id}", "starred_url": "https://api.github.com/users/named-entity/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/named-entity/subscriptions", "organizations_url": "https://api.github.com/users/named-entity/orgs", "repos_url": "https://api.github.com/users/named-entity/repos", "events_url": "https://api.github.com/users/named-entity/events{/privacy}", "received_events_url": "https://api.github.com/users/named-entity/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missing from the code=>language mapping) would be much more convenient!", "Hi! I've opened a PR to make these languages searchable on the Hub.", "Thanks @mariosasko!\r\nDo you think it is possible to turn this into a more scalable pipeline? Such as:\r\n1. Looping through all the datasets on the hub and collecting the set of all their language codes;\r\n2. Selecting the codes not covered yet in `Language.ts`\r\n3. Looking up their codes at https://iso639-3.sil.org/code_tables/639/data\r\n4. Adding all the newly found language codes to `Language.ts`", "@avidale This has been discussed in https://github.com/huggingface/datasets/issues/4881, so also feel free to share your opinion there." ]
"2023-03-27T09:42:28"
"2023-03-30T08:08:15"
"2023-03-30T08:08:15"
NONE
null
Hi! I use the language search field on https://huggingface.co/datasets. However, some of the datasets tagged by ISO language code are not accessible through this search form. For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag, but it is not included in the Languages search form. I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5675/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5674/comments
https://api.github.com/repos/huggingface/datasets/issues/5674/events
https://github.com/huggingface/datasets/issues/5674
1,641,084,105
I_kwDODunzps5h0PTJ
5,674
Stored XSS
{ "login": "Fadavvi", "id": 21213484, "node_id": "MDQ6VXNlcjIxMjEzNDg0", "avatar_url": "https://avatars.githubusercontent.com/u/21213484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fadavvi", "html_url": "https://github.com/Fadavvi", "followers_url": "https://api.github.com/users/Fadavvi/followers", "following_url": "https://api.github.com/users/Fadavvi/following{/other_user}", "gists_url": "https://api.github.com/users/Fadavvi/gists{/gist_id}", "starred_url": "https://api.github.com/users/Fadavvi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fadavvi/subscriptions", "organizations_url": "https://api.github.com/users/Fadavvi/orgs", "repos_url": "https://api.github.com/users/Fadavvi/repos", "events_url": "https://api.github.com/users/Fadavvi/events{/privacy}", "received_events_url": "https://api.github.com/users/Fadavvi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! You can contact `security@huggingface.co` to report this vulnerability." ]
"2023-03-26T20:55:58"
"2024-04-30T22:56:41"
"2023-03-27T21:01:55"
NONE
null
x
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5674/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5673/comments
https://api.github.com/repos/huggingface/datasets/issues/5673/events
https://github.com/huggingface/datasets/pull/5673
1,641,066,352
PR_kwDODunzps5M6wc3
5,673
Pass down storage options
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> download_and_prepare is not called when streaming a dataset, so we may need to have storage_options in the DatasetBuilder.__init__ ? This way it could also be passed later to as_streaming_dataset and the StreamingDownloadManager\r\n\r\n> Currently the storage_options parameter in download_and_prepare are for the target filesystem where the dataset must be downloaded and prepared as arrow files\r\n\r\nAh, I noted this when looking for ways to plumb down `storage_options` although I think I was looking at adding to `BuilderConfig`. The `DatasetBuilder` constructor looks more appropriate for this, will get that added in a future commit", "Noting as experimental SGTM. The only tests I can think of to add at the moment would be mocks that assert the storage options get passed all the way down using `mock.assert_called_with` but if Hugging Face has some S3/GCS buckets for testing, maybe those would be better in a future PR. Let me know what you think", "I think adding tests with the mockfs fixture will do the job. Tests and docs can be added when request_etag and is_remote_url support fsspec (right now they would fail with mockfs).\r\n\r\nLet's see in a subsequent PR, this is exciting ! :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009217 / 0.011353 (-0.002136) | 0.006275 / 0.011008 (-0.004733) | 0.124361 / 0.038508 (0.085853) | 0.035680 / 0.023109 (0.012570) | 0.395255 / 0.275898 (0.119357) | 0.426104 / 0.323480 (0.102624) | 0.006822 / 0.007986 (-0.001163) | 0.004467 / 0.004328 (0.000138) | 0.099404 / 0.004250 (0.095153) | 0.051919 / 0.037052 (0.014867) | 0.388286 / 0.258489 (0.129797) | 0.426361 / 0.293841 (0.132520) | 0.053100 / 0.128546 (-0.075446) | 0.019453 / 0.075646 (-0.056194) | 0.433139 / 0.419271 (0.013867) | 0.063240 / 0.043533 (0.019707) | 0.381175 / 0.255139 (0.126036) | 0.411686 / 0.283200 (0.128487) | 0.104843 / 0.141683 (-0.036840) | 1.853582 / 1.452155 (0.401427) | 1.935644 / 1.492716 (0.442928) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218969 / 0.018006 (0.200963) | 0.515011 / 0.000490 (0.514522) 
| 0.004017 / 0.000200 (0.003818) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028975 / 0.037411 (-0.008437) | 0.125239 / 0.014526 (0.110713) | 0.131371 / 0.176557 (-0.045185) | 0.203864 / 0.737135 (-0.533271) | 0.140784 / 0.296338 (-0.155554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620701 / 0.215209 (0.405492) | 6.263557 / 2.077655 (4.185903) | 2.510058 / 1.504120 (1.005938) | 2.085892 / 1.541195 (0.544697) | 2.170362 / 1.468490 (0.701872) | 1.325600 / 4.584777 (-3.259177) | 5.583355 / 3.745712 (1.837642) | 5.092791 / 5.269862 (-0.177071) | 2.814766 / 4.565676 (-1.750911) | 0.153568 / 0.424275 (-0.270707) | 0.014850 / 0.007607 (0.007243) | 0.787011 / 0.226044 (0.560967) | 7.948813 / 2.268929 (5.679885) | 3.320831 / 55.444624 (-52.123793) | 2.526327 / 6.876477 (-4.350150) | 2.691651 / 2.142072 (0.549579) | 1.521199 / 4.805227 (-3.284028) | 0.269738 / 6.500664 (-6.230926) | 0.082959 / 0.075469 (0.007490) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.740056 / 1.841788 (-0.101732) | 17.699732 / 8.074308 (9.625424) | 22.450689 / 10.191392 (12.259297) | 0.229350 / 0.680424 (-0.451073) | 0.027486 / 0.534201 (-0.506715) | 0.536153 / 0.579283 (-0.043130) | 0.608166 / 0.434364 (0.173802) | 0.629144 / 0.540337 (0.088807) | 0.732671 / 1.386936 (-0.654265) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010147 / 0.011353 (-0.001206) | 0.006484 / 0.011008 (-0.004524) | 0.098664 / 0.038508 (0.060156) | 0.036400 / 0.023109 (0.013291) | 0.432895 / 0.275898 (0.156997) | 0.466433 / 0.323480 (0.142954) | 0.008102 / 0.007986 (0.000117) | 0.004554 / 0.004328 (0.000225) | 0.100466 / 0.004250 (0.096216) | 0.054066 / 0.037052 (0.017013) | 0.439177 / 0.258489 (0.180688) | 0.502907 / 0.293841 (0.209066) | 0.059210 / 0.128546 (-0.069336) | 0.020220 / 0.075646 (-0.055426) | 0.124671 / 0.419271 (-0.294600) | 0.064278 / 0.043533 (0.020746) | 0.435659 / 0.255139 (0.180520) | 0.459670 / 0.283200 (0.176471) | 0.115574 / 0.141683 (-0.026109) | 1.826360 / 1.452155 (0.374205) | 1.943199 / 1.492716 (0.450483) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238463 / 0.018006 (0.220457) | 0.534889 / 0.000490 (0.534400) | 0.000404 / 0.000200 (0.000204) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033210 / 0.037411 (-0.004201) | 0.133529 / 0.014526 (0.119003) | 0.143813 / 0.176557 (-0.032743) | 0.213079 / 0.737135 (-0.524056) | 0.148427 / 0.296338 (-0.147912) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.656819 / 0.215209 (0.441610) | 6.414860 / 2.077655 (4.337205) | 2.756182 / 1.504120 (1.252062) | 2.405268 / 1.541195 (0.864073) | 2.436418 / 1.468490 (0.967928) | 1.289828 / 4.584777 (-3.294949) | 5.572731 / 3.745712 (1.827018) | 3.185432 / 5.269862 (-2.084429) | 2.093220 / 4.565676 (-2.472457) | 0.144817 / 0.424275 (-0.279458) | 0.015674 / 0.007607 (0.008067) | 0.801238 / 0.226044 (0.575194) | 7.955925 / 2.268929 (5.686996) | 3.605670 / 55.444624 (-51.838955) | 2.837568 / 6.876477 (-4.038908) | 2.873848 / 2.142072 (0.731775) | 1.493512 / 4.805227 (-3.311715) | 0.266251 / 6.500664 (-6.234413) | 0.082417 / 0.075469 (0.006948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.608685 / 1.841788 (-0.233103) | 18.587875 / 8.074308 (10.513567) | 21.786119 / 10.191392 (11.594727) | 0.261748 / 0.680424 (-0.418675) | 0.026228 / 0.534201 (-0.507973) | 0.553538 / 0.579283 (-0.025745) | 0.599780 / 0.434364 (0.165416) | 0.665663 / 0.540337 (0.125325) | 0.792785 / 1.386936 (-0.594151) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1520e017a9bb6f80e82a38b578213e418ad7e845 \"CML watermark\")\n" ]
"2023-03-26T20:09:37"
"2023-03-28T15:03:38"
"2023-03-28T14:54:17"
CONTRIBUTOR
null
Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout arg, and also fixes an issue mentioned in https://github.com/huggingface/datasets/issues/5281 by allowing users to pass down `storage_options` all the way from `datasets.load_dataset` to support implementation-specific credentials. This supports something like the following to provide credentials explicitly instead of relying on boto's methods of locating them: ``` load_dataset(..., data_files=["s3://..."], storage_options={"profile": "..."}) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5673/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5673", "html_url": "https://github.com/huggingface/datasets/pull/5673", "diff_url": "https://github.com/huggingface/datasets/pull/5673.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5673.patch", "merged_at": "2023-03-28T14:54:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/5672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5672/comments
https://api.github.com/repos/huggingface/datasets/issues/5672/events
https://github.com/huggingface/datasets/issues/5672
1,641,005,322
I_kwDODunzps5hz8EK
5,672
Pushing dataset to hub crash
{ "login": "tzvc", "id": 14275989, "node_id": "MDQ6VXNlcjE0Mjc1OTg5", "avatar_url": "https://avatars.githubusercontent.com/u/14275989?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tzvc", "html_url": "https://github.com/tzvc", "followers_url": "https://api.github.com/users/tzvc/followers", "following_url": "https://api.github.com/users/tzvc/following{/other_user}", "gists_url": "https://api.github.com/users/tzvc/gists{/gist_id}", "starred_url": "https://api.github.com/users/tzvc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tzvc/subscriptions", "organizations_url": "https://api.github.com/users/tzvc/orgs", "repos_url": "https://api.github.com/users/tzvc/repos", "events_url": "https://api.github.com/users/tzvc/events{/privacy}", "received_events_url": "https://api.github.com/users/tzvc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\n\r\nIn the meantime you can install datasets from source", "Hi @lhoestq ,\r\n\r\nWhat version of datasets library fix this case? I am using the last `v2.10.1` and I get the same error.", "We just released 2.11 which includes a fix :)" ]
"2023-03-26T17:42:13"
"2023-03-30T08:11:05"
"2023-03-30T08:11:05"
NONE
null
### Describe the bug Uploading a dataset with `push_to_hub()` fails without error description. ### Steps to reproduce the bug Hey there, I've built a image dataset of 100k images + text pair as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder Now I'm trying to push it to the hub but I'm running into issues. First, I tried doing it via git directly, I added all the files in git lfs and pushed but I got hit with an error saying huggingface only accept up to 10k files in a folder. So I'm now trying with the `push_to_hub()` func as follow: ```python from datasets import load_dataset import os dataset = load_dataset("imagefolder", data_dir="./data", split="train") dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN')) ``` But again, this produces an error: ``` Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100212/100212 [00:00<00:00, 439108.61it/s] Downloading and preparing dataset imagefolder/default to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f... Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 100211/100211 [00:00<00:00, 149323.73it/s] Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15947.92it/s] Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2245.34it/s] Dataset imagefolder downloaded and prepared to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f. Subsequent calls will reuse this data. Resuming upload of the dataset shards. Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:31<00:00, 2.24s/it] Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [00:00<00:00, 225kB/s] Traceback (most recent call last): File "/home/contact_theochampion/organization-logos/push_to_hub.py", line 5, in <module> dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN')) File "/home/contact_theochampion/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub repo_info = dataset_infos[next(iter(dataset_infos))] StopIteration ``` What could be happening here ? ### Expected behavior The dataset is pushed to the hub ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.10.0-21-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5672/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5672/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5671/comments
https://api.github.com/repos/huggingface/datasets/issues/5671/events
https://github.com/huggingface/datasets/issues/5671
1,640,840,012
I_kwDODunzps5hzTtM
5,671
How to use `load_dataset('glue', 'cola')`
{ "login": "makinzm", "id": 40193664, "node_id": "MDQ6VXNlcjQwMTkzNjY0", "avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/makinzm", "html_url": "https://github.com/makinzm", "followers_url": "https://api.github.com/users/makinzm/followers", "following_url": "https://api.github.com/users/makinzm/following{/other_user}", "gists_url": "https://api.github.com/users/makinzm/gists{/gist_id}", "starred_url": "https://api.github.com/users/makinzm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/makinzm/subscriptions", "organizations_url": "https://api.github.com/users/makinzm/orgs", "repos_url": "https://api.github.com/users/makinzm/repos", "events_url": "https://api.github.com/users/makinzm/events{/privacy}", "received_events_url": "https://api.github.com/users/makinzm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sounds like an issue with incompatible `transformers` dependencies versions.\r\n\r\nCan you try to update `transformers` ?\r\n\r\nEDIT: I checked the `transformers` dependencies and it seems like you need `tokenizers>=0.10.1,<0.11` with `transformers==4.5.1`\r\n\r\nEDIT2: this old version of `datasets` seems to import `transformers` but it's no longer the case, so you could also simply update `datasets` and `transformers` won't be imported", "Thank you for advising me to update these libraries versions.\r\n\r\nI can implement codes using `datasets==2.10.1` and `transformers==4.27.3`" ]
"2023-03-26T09:40:34"
"2023-03-28T07:43:44"
"2023-03-28T07:43:43"
NONE
null
### Describe the bug I'm new to use HuggingFace datasets but I cannot use `load_dataset('glue', 'cola')`. - I was stacked by the following problem: ```python from datasets import load_dataset cola_dataset = load_dataset('glue', 'cola') --------------------------------------------------------------------------- InvalidVersion Traceback (most recent call last) File <timed exec>:1 (Omit because of long error message) File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version) 195 match = self._regex.search(version) 196 if not match: --> 197 raise InvalidVersion(f"Invalid version: '{version}'") 199 # Store the parsed out pieces of the version 200 self._version = _Version( 201 epoch=int(match.group("epoch")) if match.group("epoch") else 0, 202 release=tuple(int(i) for i in match.group("release").split(".")), (...) 208 local=_parse_local_version(match.group("local")), 209 ) InvalidVersion: Invalid version: '0.10.1,<0.11' ``` - You can check this full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb) ### Steps to reproduce the bug - This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup) 1. cd `/DockerImage` and command `docker build . -t week0` 2. cd `/` and command `docker-compose up` 3. Run `experimental_notebooks/data_exploration.ipynb` ---- Just to be sure, I wrote down Dockerfile and requirements.txt - Dockerfile ```Dockerfile FROM python:3.8 WORKDIR /root/working RUN apt-get update && \ apt-get install -y python3-dev python3-pip python3-venv && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* COPY requirements.txt . RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt CMD ["bash"] ``` - requirements.txt ```txt pytorch-lightning==1.2.10 datasets==1.6.2 transformers==4.5.1 scikit-learn==0.24.2 ``` ### Expected behavior There is no bug to implement `load_dataset('glue', 'cola')` ### Environment info I already wrote it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5671/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5670/comments
https://api.github.com/repos/huggingface/datasets/issues/5670/events
https://github.com/huggingface/datasets/issues/5670
1,640,607,045
I_kwDODunzps5hya1F
5,670
Unable to load multi class classification datasets
{ "login": "ysahil97", "id": 19690506, "node_id": "MDQ6VXNlcjE5NjkwNTA2", "avatar_url": "https://avatars.githubusercontent.com/u/19690506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ysahil97", "html_url": "https://github.com/ysahil97", "followers_url": "https://api.github.com/users/ysahil97/followers", "following_url": "https://api.github.com/users/ysahil97/following{/other_user}", "gists_url": "https://api.github.com/users/ysahil97/gists{/gist_id}", "starred_url": "https://api.github.com/users/ysahil97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysahil97/subscriptions", "organizations_url": "https://api.github.com/users/ysahil97/orgs", "repos_url": "https://api.github.com/users/ysahil97/repos", "events_url": "https://api.github.com/users/ysahil97/events{/privacy}", "received_events_url": "https://api.github.com/users/ysahil97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)", "Thanks @lhoestq!\r\n\r\nI'll close this issue now." ]
"2023-03-25T18:06:15"
"2023-03-27T22:54:56"
"2023-03-27T22:54:56"
NONE
null
### Describe the bug I've been playing around with huggingface library, mostly with `datasets` and wanted to download the multi class classification datasets to fine tune BERT on this task. ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)). While loading the dataset, I'm getting the following error snippet. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[44], line 3 1 from datasets import load_dataset ----> 3 imdb_dataset = load_dataset("yelp_review_full") 4 imdb_dataset File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1716 ignore_verifications = ignore_verifications or save_infos 1718 # Create a dataset builder -> 1719 builder_instance = load_dataset_builder( 1720 path=path, 1721 name=name, 1722 data_dir=data_dir, 1723 data_files=data_files, 1724 cache_dir=cache_dir, 1725 features=features, 1726 download_config=download_config, 1727 download_mode=download_mode, 1728 revision=revision, 1729 use_auth_token=use_auth_token, 1730 **config_kwargs, 1731 ) 1733 # Return iterable dataset in case of streaming 1734 if streaming: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1520 raise ValueError(error_msg) 1522 # Instantiate the dataset builder -> 1523 builder_instance: DatasetBuilder = builder_cls( 1524 cache_dir=cache_dir, 1525 config_name=config_name, 1526 data_dir=data_dir, 1527 data_files=data_files, 1528 hash=hash, 1529 features=features, 1530 use_auth_token=use_auth_token, 1531 **builder_kwargs, 1532 **config_kwargs, 1533 ) 1535 return builder_instance File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1291 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1292 super().__init__(*args, **kwargs) 1293 # Batch size used by the ArrowWriter 1294 # It defines the number of samples that are kept in memory before writing them 1295 # and also the length of the arrow chunks 1296 # None means that the ArrowWriter will use its default value 1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 309 # prepare info: DatasetInfo are a standardized dataclass across all datasets 310 # Prefill datasetinfo 311 if info is None: --> 312 info = self.get_exported_dataset_info() 313 info.update(self._info()) 314 info.builder_name = self.name File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self) 400 def get_exported_dataset_info(self) -> DatasetInfo: 401 """Empty DatasetInfo if doesn't exist 402 403 Example: (...) 
410 ``` 411 """ --> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls) 385 @classmethod 386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict: 387 """Empty dict if doesn't exist 388 389 Example: (...) 396 ``` 397 """ --> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir) 368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md") 369 if "dataset_info" in dataset_metadata: --> 370 return cls.from_metadata(dataset_metadata) 371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)): 372 # this is just to have backward compatibility with dataset_infos.json files 373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata) 387 return cls( 388 { 389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict( (...) 393 } 394 ) 395 else: --> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"]) 397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default") 398 return cls({dataset_info.config_name: dataset_info}) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data) 330 yaml_data = copy.deepcopy(yaml_data) 331 if yaml_data.get("features") is not None: --> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"]) 333 if yaml_data.get("splits") is not None: 334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"]) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data) 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1734 return {"_type": 
snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." 1709 ) 1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids] TypeError: can only concatenate str (not "int") to str ``` The same issue happens when I try to load `go-emotions` multi class classification dataset. Could somebody guide me on how to fix this issue? ### Steps to reproduce the bug Run the following code snippet in a python script/ notebook cell: ``` from datasets import load_dataset yelp_dataset = load_dataset("yelp_review_full") yelp_dataset ``` ### Expected behavior The dataset should be loaded perfectly, which showing the train, test and unsupervised splits with the basic data statistics ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5670/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5669/comments
https://api.github.com/repos/huggingface/datasets/issues/5669/events
https://github.com/huggingface/datasets/issues/5669
1,638,070,046
I_kwDODunzps5hovce
5,669
Almost identical datasets, huge performance difference
{ "login": "eli-osherovich", "id": 2437102, "node_id": "MDQ6VXNlcjI0MzcxMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eli-osherovich", "html_url": "https://github.com/eli-osherovich", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Do I miss something here?", "Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `arrow bytes -> NumPy -> decoding as PIL.Image -> PyTorch` is faster than going from `arrow list -> NumPy -> PyTorch`. \r\n\r\nTo store image bytes in the second example, you can do the following:\r\n\r\n```python\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"])\r\n return example\r\n\r\nfeatures = dataset.features.copy()\r\ndel features[\"image\"]\r\nfeatures[\"image2\"] = datasets.Image()\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"], features=features)\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n```", "Thanks, @mariosasko. I could not understand why a (decoded) sequence should be MUCH slower than an encoded image (that must be decoded every time). At any rate, I tried you suggestion. It made the `map` step to run extremely slow (consumes all the 16GB of memory and starts swapping)\r\n\r\nI tried also the easiest (as I see it) scenario, where images are kept as bytes, but it made things even worse: not only it was extremely slow, but also crashes\r\n\r\n```python\r\n\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"]).tobytes()\r\n return example\r\n\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"])\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n\r\n\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nOutput exceeds the size limit. 
Open the full output data in a text editor\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nFile ~/virtenvs/py310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1133, in _MultiProcessingDataLoaderIter._try_get_data(self, timeout)\r\n 1132 try:\r\n-> 1133 data = self._data_queue.get(timeout=timeout)\r\n 1134 return (True, data)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/queues.py:113, in Queue.get(self, block, timeout)\r\n 112 timeout = deadline - time.monotonic()\r\n--> 113 if not self._poll(timeout):\r\n 114 raise Empty\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:257, in _ConnectionBase.poll(self, timeout)\r\n 256 self._check_readable()\r\n--> 257 return self._poll(timeout)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:424, in Connection._poll(self, timeout)\r\n 423 def _poll(self, timeout):\r\n--> 424 r = wait([self], timeout)\r\n 425 return bool(r)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:931, in wait(object_list, timeout)\r\n 930 while True:\r\n--> 931 ready = selector.select(timeout)\r\n 932 if ready:\r\n...\r\n-> 1146 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e\r\n 1147 if isinstance(e, queue.Empty):\r\n 1148 return (False, None)\r\n\r\nRuntimeError: DataLoader worker (pid(s) 195393) exited unexpectedly\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\n```\r\n", "Correction: the `beans` dataset stores the image file paths, not the bytes.\r\n\r\nFor your use case, I think it makes more sense to use `with_tranform` than `map` and lazily decode images with `cv2.imread` when indexing an example/batch:\r\n```python\r\nimport cv2\r\n\r\ndef transform(batch):\r\n batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\r\n return batch\r\n\r\ndataset = dataset.with_transform(transform)\r\n```\r\n", "This is incorrect.\n\nDid you try to run it? dataset[0] returns a tensor of numbers. dataset2[0]\nreturns the same tensor, but after a few long seconds. 
Looping over a\nthousand of images cannot take 15 minutes.\n\nOn Fri, 24 Mar 2023 at 19:28 Mario Šaško ***@***.***> wrote:\n\n> Correction: the beans dataset stores the image file paths, not the bytes.\n>\n> For your use case, I think it makes more sense to use with_tranform than\n> map and lazily decode images with cv2.imread when accessing an\n> example/batch:\n>\n> import cv2\n> def transform(batch):\n> batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\n> return batch\n> dataset = dataset.with_transform(transform)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5669#issuecomment-1483084347>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73SHRWXIQX6SCYCJ7ITW5XDUDANCNFSM6AAAAAAWFSHWEM>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "I updated the transform with the NumPy -> PyTorch conversion.\r\n\r\nI'm sharing the entire code:\r\n```python\r\nimport cv2\r\nimport numpy as np\r\nimport datasets\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\ndataset = load_dataset(\"beans\", split=\"train\")\r\n\r\ndef transform(batch):\r\n # # Pillow decodes as RGB\r\n # batch[\"image\"] = torch.stack([torch.from_numpy(cv2.cvtColor(cv2.imread(image_file_path), cv2.COLOR_BGR2RGB)) for image_file_path in batch[\"image_file_path\"]])\r\n batch[\"image\"] = torch.stack([torch.from_numpy(cv2.imread(image_file_path)) for image_file_path in batch[\"image_file_path\"]])\r\n batch[\"labels\"] = torch.tensor(batch[\"labels\"])\r\n return batch\r\n\r\ndataset2 = dataset.cast_column(\"image\", datasets.Image(decode=False)).with_transform(transform)\r\n\r\nfor x in DataLoader(dataset2, batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n```\r\n\r\nThis code is ≈ 10% faster on my machine than the default decoding with Pillow and `.with_format(\"torch\")`.", "Thanks, @mariosasko \r\nMy question remain unanswered though. Why is the `map`ed dataset so slow? My understanding is that a dataset of numpy arrays should be must faster than a dataset that has to decode images into numpy arrays every time one accesses an item. " ]
"2023-03-23T18:20:20"
"2023-04-09T18:56:23"
null
CONTRIBUTOR
null
### Describe the bug I am struggling to understand the (huge) performance difference between two datasets that are almost identical. ### Steps to reproduce the bug # Fast (normal) dataset speed: ```python import cv2 from datasets import load_dataset from torch.utils.data import DataLoader dataset = load_dataset("beans", split="train") for x in DataLoader(dataset.with_format("torch"), batch_size=16, shuffle=True, num_workers=8): pass ``` The above pass over the dataset takes about 1.5 seconds on my computer. However, if I re-create (almost) the same dataset, the sweep takes a HUGE amount of time: 15 minutes. Steps to reproduce: ```python def transform(example): example["image2"] = cv2.imread(example["image_file_path"]) return example dataset2 = dataset.map(transform, remove_columns=["image"]) for x in DataLoader(dataset2.with_format("torch"), batch_size=16, shuffle=True, num_workers=8): pass ``` ### Expected behavior Same timings ### Environment info python==3.10.9 datasets==2.10.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5669/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5668/comments
https://api.github.com/repos/huggingface/datasets/issues/5668/events
https://github.com/huggingface/datasets/pull/5668
1,638,018,598
PR_kwDODunzps5MwuIp
5,668
Support for downloading only provided split
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5668). All of your documentation changes will be reflected on that endpoint.", "My previous comment didn't create the retro-link in the PR. I write it here again.\r\n\r\nYou can check the context and the discussions we had about this feature enhancement in this PR:\r\n- #2249" ]
"2023-03-23T17:53:39"
"2023-03-24T06:43:14"
null
CONTRIBUTOR
null
We can pass `split` to `_split_generators()`. But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5668/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5668", "html_url": "https://github.com/huggingface/datasets/pull/5668", "diff_url": "https://github.com/huggingface/datasets/pull/5668.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5668.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5667/comments
https://api.github.com/repos/huggingface/datasets/issues/5667/events
https://github.com/huggingface/datasets/pull/5667
1,637,789,361
PR_kwDODunzps5Mv8Im
5,667
Jax requires jaxlib
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008592 / 0.011353 (-0.002761) | 0.005182 / 0.011008 (-0.005826) | 0.097916 / 0.038508 (0.059408) | 0.034612 / 0.023109 (0.011503) | 0.313760 / 0.275898 (0.037862) | 0.353422 / 0.323480 (0.029942) | 0.005880 / 0.007986 (-0.002106) | 0.004123 / 0.004328 (-0.000205) | 0.073634 / 0.004250 (0.069384) | 0.049349 / 0.037052 (0.012297) | 0.317381 / 0.258489 (0.058892) | 0.365821 / 0.293841 (0.071980) | 0.036482 / 0.128546 (-0.092065) | 0.012126 / 0.075646 (-0.063521) | 0.334640 / 0.419271 (-0.084631) | 0.050551 / 0.043533 (0.007018) | 0.310472 / 0.255139 (0.055333) | 0.349049 / 0.283200 (0.065850) | 0.101343 / 0.141683 (-0.040340) | 1.447903 / 1.452155 (-0.004252) | 1.518793 / 1.492716 (0.026077) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210971 / 0.018006 (0.192965) | 0.449471 / 0.000490 (0.448982) | 0.003596 / 0.000200 (0.003396) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027386 / 0.037411 (-0.010025) | 0.112683 / 0.014526 (0.098157) | 0.117603 / 0.176557 (-0.058954) | 0.174186 / 0.737135 (-0.562949) | 0.123510 / 0.296338 (-0.172829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422595 / 0.215209 (0.207386) | 4.224713 / 2.077655 (2.147058) | 
2.006359 / 1.504120 (0.502240) | 1.823767 / 1.541195 (0.282572) | 1.898340 / 1.468490 (0.429849) | 0.721656 / 4.584777 (-3.863121) | 3.823498 / 3.745712 (0.077785) | 2.172380 / 5.269862 (-3.097481) | 1.469773 / 4.565676 (-3.095904) | 0.086978 / 0.424275 (-0.337297) | 0.012642 / 0.007607 (0.005035) | 0.517830 / 0.226044 (0.291785) | 5.171150 / 2.268929 (2.902221) | 2.495238 / 55.444624 (-52.949386) | 2.114380 / 6.876477 (-4.762097) | 2.274329 / 2.142072 (0.132257) | 0.863855 / 4.805227 (-3.941372) | 0.174127 / 6.500664 (-6.326537) | 0.065939 / 0.075469 (-0.009530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208831 / 1.841788 (-0.632957) | 15.016704 / 8.074308 (6.942396) | 14.721231 / 10.191392 (4.529839) | 0.144140 / 0.680424 (-0.536284) | 0.017781 / 0.534201 (-0.516420) | 0.425679 / 0.579283 (-0.153604) | 0.416747 / 0.434364 (-0.017617) | 0.490160 / 0.540337 (-0.050177) | 0.583639 / 1.386936 (-0.803297) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007670 / 0.011353 (-0.003683) | 0.005383 / 0.011008 (-0.005626) | 0.075756 / 0.038508 (0.037248) | 0.033373 / 0.023109 (0.010263) | 0.341017 / 0.275898 (0.065119) | 0.378890 / 0.323480 (0.055410) | 0.005945 / 0.007986 (-0.002040) | 0.004179 / 0.004328 (-0.000150) | 0.074588 / 0.004250 (0.070337) | 0.048564 / 0.037052 (0.011511) | 0.338774 / 0.258489 (0.080285) | 0.391081 / 0.293841 (0.097240) | 0.036659 / 0.128546 (-0.091887) | 0.012241 / 0.075646 (-0.063406) | 0.086910 / 0.419271 (-0.332361) | 0.049745 / 0.043533 (0.006212) | 0.332810 / 0.255139 (0.077671) | 0.360317 / 0.283200 (0.077117) | 0.103399 / 0.141683 (-0.038283) | 1.456754 / 1.452155 (0.004599) | 1.542644 / 1.492716 (0.049928) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207182 / 0.018006 (0.189176) | 0.455659 / 0.000490 (0.455169) | 0.003609 / 0.000200 (0.003409) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029556 / 0.037411 (-0.007856) | 0.114215 / 0.014526 (0.099690) | 0.127721 / 0.176557 (-0.048836) | 0.177070 / 0.737135 (-0.560065) | 0.128840 / 0.296338 (-0.167499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428176 / 0.215209 (0.212967) | 4.274324 / 2.077655 (2.196669) | 2.020058 / 1.504120 (0.515938) | 1.823343 / 1.541195 (0.282148) | 1.924688 / 1.468490 (0.456198) | 0.719195 / 4.584777 (-3.865582) | 3.760445 / 3.745712 (0.014733) | 2.133813 / 5.269862 (-3.136049) | 1.364876 / 4.565676 (-3.200801) | 0.087523 / 0.424275 (-0.336752) | 0.013712 / 0.007607 (0.006105) | 0.528403 / 0.226044 (0.302359) | 5.307780 / 2.268929 (3.038851) | 2.496747 / 55.444624 (-52.947877) | 2.169136 / 6.876477 (-4.707341) | 2.235719 / 2.142072 (0.093646) | 0.875281 / 4.805227 (-3.929946) | 0.172369 / 6.500664 (-6.328295) | 0.064667 / 0.075469 (-0.010802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262594 / 1.841788 (-0.579193) | 15.182681 / 8.074308 (7.108373) | 14.725663 / 10.191392 (4.534271) | 0.180961 / 0.680424 (-0.499462) | 0.017632 / 0.534201 (-0.516569) | 0.427531 / 0.579283 (-0.151752) | 0.431741 / 0.434364 (-0.002622) | 0.503251 / 0.540337 (-0.037087) | 0.597423 / 1.386936 (-0.789513) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f4cf224dcb1043a272971ed331a214cf65c504be \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009761 / 0.011353 (-0.001592) | 0.006779 / 0.011008 (-0.004229) | 0.132786 / 0.038508 (0.094277) | 0.037721 / 0.023109 (0.014611) | 0.435685 / 0.275898 (0.159787) | 0.447488 / 0.323480 (0.124009) | 0.006848 / 0.007986 (-0.001137) | 0.005099 / 0.004328 (0.000771) | 0.097384 / 0.004250 (0.093133) | 0.056663 / 0.037052 (0.019610) | 0.463407 / 0.258489 (0.204918) | 0.502544 / 0.293841 (0.208703) | 0.053817 / 0.128546 (-0.074729) | 0.020253 / 0.075646 (-0.055393) | 0.446653 / 0.419271 (0.027382) | 0.064465 / 0.043533 (0.020932) | 0.455375 / 0.255139 (0.200236) | 0.458378 / 0.283200 (0.175178) | 0.109124 / 0.141683 (-0.032559) | 1.957338 / 1.452155 (0.505184) | 1.960391 / 1.492716 (0.467674) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219566 / 0.018006 (0.201560) | 0.558181 / 0.000490 (0.557691) | 0.004678 / 0.000200 (0.004478) | 0.000125 / 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032643 / 0.037411 (-0.004768) | 0.147375 / 0.014526 (0.132849) | 0.130821 / 0.176557 (-0.045736) | 0.203202 / 0.737135 (-0.533933) | 0.145186 / 0.296338 (-0.151153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665773 / 0.215209 (0.450564) | 6.674021 / 2.077655 (4.596366) | 2.662372 / 1.504120 (1.158253) | 2.333327 / 1.541195 (0.792132) | 2.221413 / 1.468490 (0.752923) | 1.287001 / 4.584777 (-3.297776) | 5.534326 / 3.745712 (1.788614) | 3.188809 / 5.269862 (-2.081052) | 2.261717 / 4.565676 (-2.303960) | 0.151910 / 0.424275 (-0.272366) | 0.020509 / 0.007607 (0.012902) | 0.863608 / 0.226044 (0.637564) | 8.442155 / 2.268929 (6.173227) | 3.438260 / 55.444624 (-52.006364) | 2.692503 / 6.876477 (-4.183974) | 2.810997 / 2.142072 (0.668925) | 1.477345 / 4.805227 (-3.327882) | 0.261942 / 6.500664 (-6.238722) | 0.086347 / 0.075469 (0.010878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.529072 / 1.841788 (-0.312716) | 17.213019 / 8.074308 (9.138711) | 21.887309 / 10.191392 (11.695917) | 0.259660 / 0.680424 (-0.420763) | 0.027916 / 0.534201 (-0.506285) | 0.554103 / 0.579283 (-0.025180) | 0.614566 / 0.434364 (0.180202) | 0.700456 / 0.540337 (0.160119) 
| 0.756860 / 1.386936 (-0.630077) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009267 / 0.011353 (-0.002086) | 0.006414 / 0.011008 (-0.004594) | 0.102404 / 0.038508 (0.063896) | 0.034885 / 0.023109 (0.011776) | 0.413191 / 0.275898 (0.137293) | 0.483901 / 0.323480 (0.160422) | 0.006614 / 0.007986 (-0.001372) | 0.004608 / 0.004328 (0.000280) | 0.096717 / 0.004250 (0.092467) | 0.055123 / 0.037052 (0.018071) | 0.417786 / 0.258489 (0.159297) | 0.490886 / 0.293841 (0.197045) | 0.056951 / 0.128546 (-0.071595) | 0.021073 / 0.075646 (-0.054574) | 0.116576 / 0.419271 (-0.302695) | 0.063968 / 0.043533 (0.020435) | 0.420495 / 0.255139 (0.165356) | 0.449667 / 0.283200 (0.166467) | 0.115318 / 0.141683 (-0.026365) | 1.899398 / 1.452155 (0.447243) | 1.992175 / 1.492716 (0.499459) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233076 / 0.018006 (0.215070) | 0.518377 / 0.000490 (0.517887) | 0.000809 / 0.000200 (0.000609) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030951 / 0.037411 (-0.006460) | 0.134940 / 0.014526 (0.120414) | 0.147789 / 0.176557 (-0.028767) | 0.205854 / 0.737135 (-0.531281) | 0.146726 / 0.296338 (-0.149613) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648006 / 0.215209 (0.432797) | 6.416688 / 2.077655 (4.339033) | 2.696462 / 1.504120 (1.192342) | 2.293071 / 1.541195 (0.751877) | 2.319426 / 1.468490 
(0.850935) | 1.332398 / 4.584777 (-3.252379) | 5.706956 / 3.745712 (1.961244) | 4.464473 / 5.269862 (-0.805388) | 2.817364 / 4.565676 (-1.748312) | 0.157595 / 0.424275 (-0.266680) | 0.015721 / 0.007607 (0.008114) | 0.806055 / 0.226044 (0.580010) | 7.927795 / 2.268929 (5.658866) | 3.461251 / 55.444624 (-51.983373) | 2.664466 / 6.876477 (-4.212010) | 2.660041 / 2.142072 (0.517968) | 1.531135 / 4.805227 (-3.274092) | 0.260293 / 6.500664 (-6.240371) | 0.077440 / 0.075469 (0.001971) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.687325 / 1.841788 (-0.154463) | 17.905080 / 8.074308 (9.830772) | 21.046794 / 10.191392 (10.855402) | 0.245335 / 0.680424 (-0.435089) | 0.026830 / 0.534201 (-0.507371) | 0.510798 / 0.579283 (-0.068485) | 0.590041 / 0.434364 (0.155677) | 0.607440 / 0.540337 (0.067102) | 0.725030 / 1.386936 (-0.661906) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#91dcb3636e410a249177f5e0508ed101ad7ee25b \"CML watermark\")\n", "I self-assigned #5666 and I was working on it... without success: https://github.com/huggingface/datasets/tree/fix-5666\r\n\r\nI think your approach is the right one because installation of jax is not trivial...\r\n\r\nNext time it would be better that you self-assign an issue before working on it, so that we avoid duplicate work... :sweat_smile: ", "Oh sorry I forgot to self assign this time", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008436 / 0.011353 (-0.002917) | 0.005702 / 0.011008 (-0.005306) | 0.113518 / 0.038508 (0.075010) | 0.039639 / 0.023109 (0.016530) | 0.353200 / 0.275898 (0.077302) | 0.382428 / 0.323480 (0.058948) | 0.007419 / 0.007986 (-0.000566) | 0.005640 / 0.004328 (0.001311) | 0.083905 / 0.004250 (0.079655) | 0.053258 / 0.037052 (0.016205) | 0.371069 / 0.258489 (0.112580) | 0.390439 / 0.293841 (0.096598) | 0.042679 / 0.128546 (-0.085867) | 0.013438 / 0.075646 (-0.062208) | 0.390116 / 0.419271 (-0.029155) | 0.068782 / 0.043533 (0.025249) | 0.352620 / 0.255139 (0.097481) | 0.371939 / 0.283200 (0.088739) | 
0.126157 / 0.141683 (-0.015525) | 1.694638 / 1.452155 (0.242484) | 1.799211 / 1.492716 (0.306495) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260099 / 0.018006 (0.242092) | 0.489852 / 0.000490 (0.489362) | 0.012549 / 0.000200 (0.012349) | 0.000275 / 0.000054 (0.000221) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032235 / 0.037411 (-0.005177) | 0.125325 / 0.014526 (0.110799) | 0.137242 / 0.176557 (-0.039315) | 0.206566 / 0.737135 (-0.530570) | 0.143260 / 0.296338 (-0.153078) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478510 / 0.215209 (0.263301) | 4.746439 / 2.077655 (2.668784) | 2.195072 / 1.504120 (0.690952) | 1.958163 / 1.541195 (0.416969) | 2.028566 / 1.468490 (0.560075) | 0.821289 / 4.584777 (-3.763488) | 4.765529 / 3.745712 (1.019817) | 2.378753 / 5.269862 (-2.891108) | 1.514776 / 4.565676 (-3.050900) | 0.100673 / 0.424275 (-0.323602) | 0.014720 / 0.007607 (0.007113) | 0.606388 / 0.226044 (0.380343) | 5.975285 / 2.268929 (3.706357) | 2.866762 / 55.444624 (-52.577862) | 2.392132 / 6.876477 (-4.484345) | 2.546487 / 2.142072 (0.404415) | 0.982394 / 4.805227 (-3.822833) | 0.201195 / 6.500664 (-6.299469) | 0.077781 / 0.075469 (0.002312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.420613 / 1.841788 (-0.421174) | 17.743030 / 8.074308 (9.668722) | 16.752344 / 10.191392 (6.560951) | 0.167464 / 0.680424 (-0.512960) | 0.020908 / 0.534201 (-0.513293) | 0.502919 / 0.579283 (-0.076364) | 0.506375 / 0.434364 (0.072011) | 0.602695 / 0.540337 (0.062358) | 0.689398 / 1.386936 (-0.697538) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after 
write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008713 / 0.011353 (-0.002640) | 0.006152 / 0.011008 (-0.004856) | 0.091264 / 0.038508 (0.052756) | 0.040284 / 0.023109 (0.017174) | 0.417598 / 0.275898 (0.141700) | 0.460141 / 0.323480 (0.136661) | 0.006589 / 0.007986 (-0.001397) | 0.004671 / 0.004328 (0.000343) | 0.089360 / 0.004250 (0.085110) | 0.055113 / 0.037052 (0.018061) | 0.415241 / 0.258489 (0.156752) | 0.470566 / 0.293841 (0.176725) | 0.042963 / 0.128546 (-0.085584) | 0.014421 / 0.075646 (-0.061225) | 0.106333 / 0.419271 (-0.312939) | 0.057810 / 0.043533 (0.014277) | 0.417889 / 0.255139 (0.162750) | 0.444236 / 0.283200 (0.161036) | 0.119508 / 0.141683 (-0.022175) | 1.736209 / 1.452155 (0.284055) | 1.790319 / 1.492716 (0.297602) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219184 / 0.018006 (0.201178) | 0.493931 / 0.000490 (0.493441) | 0.006727 / 0.000200 (0.006527) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034415 / 0.037411 (-0.002996) | 0.132165 / 0.014526 (0.117639) | 0.143138 / 0.176557 (-0.033418) | 0.200052 / 0.737135 (-0.537083) | 0.148906 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483686 / 0.215209 (0.268476) | 4.849874 / 2.077655 (2.772220) | 2.374276 / 1.504120 (0.870156) | 2.168334 / 1.541195 (0.627139) | 2.285983 / 1.468490 (0.817493) | 0.833041 / 4.584777 (-3.751735) | 4.665915 / 3.745712 (0.920203) | 4.543559 / 5.269862 (-0.726302) | 2.246926 / 4.565676 (-2.318750) | 0.098490 / 0.424275 (-0.325785) | 0.014934 / 0.007607 (0.007327) | 0.591878 / 0.226044 (0.365834) | 6.039852 / 2.268929 (3.770923) | 2.881244 / 55.444624 (-52.563381) | 2.486297 / 6.876477 (-4.390179) | 2.564642 / 2.142072 (0.422569) | 0.985684 / 4.805227 (-3.819543) | 0.199101 / 6.500664 (-6.301563) | 0.078138 / 0.075469 (0.002669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow 
|\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.647744 / 1.841788 (-0.194043) | 18.986464 / 8.074308 (10.912156) | 17.246575 / 10.191392 (7.055183) | 0.219151 / 0.680424 (-0.461273) | 0.022219 / 0.534201 (-0.511982) | 0.547207 / 0.579283 (-0.032076) | 0.525943 / 0.434364 (0.091579) | 0.616909 / 0.540337 (0.076572) | 0.757423 / 1.386936 (-0.629513) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f423b69cd4371bd03bb819c60450534f8850ad61 \"CML watermark\")\n" ]
"2023-03-23T15:41:09"
"2023-03-23T16:23:11"
"2023-03-23T16:14:52"
MEMBER
null
close https://github.com/huggingface/datasets/issues/5666
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5667/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5667", "html_url": "https://github.com/huggingface/datasets/pull/5667", "diff_url": "https://github.com/huggingface/datasets/pull/5667.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5667.patch", "merged_at": "2023-03-23T16:14:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/5666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5666/comments
https://api.github.com/repos/huggingface/datasets/issues/5666/events
https://github.com/huggingface/datasets/issues/5666
1,637,675,062
I_kwDODunzps5hnPA2
5,666
Support tensorflow 2.12.0 in CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-03-23T14:37:51"
"2023-03-23T16:14:54"
"2023-03-23T16:14:54"
MEMBER
null
Once we find out the root cause of #5663, we should revert the temporary pin on tensorflow introduced by #5664.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5666/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5665/comments
https://api.github.com/repos/huggingface/datasets/issues/5665/events
https://github.com/huggingface/datasets/issues/5665
1,637,193,648
I_kwDODunzps5hlZew
5,665
Feature request: IterableDataset.push_to_hub
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2023-03-23T09:53:04"
"2023-03-23T09:53:16"
null
CONTRIBUTOR
null
### Feature request It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`. Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit on your disk, you'd like to leverage streaming: ``` from datasets import load_dataset dataset = load_dataset("laion/laion400m", streaming=True, split="train") ``` Then you could filter the dataset based on certain conditions: ``` filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400) ``` In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push: ``` from datasets import Dataset Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...) ``` It would be great if we could instead lazily push the data to the hub (basically stream the data to the hub), without being limited by our disk size: ``` filtered_dataset.push_to_hub("my-filtered-dataset") ``` ### Motivation This feature would be very useful for people who want to filter huge datasets without having to load the entire dataset or a filtered version thereof on their local disk. ### Your contribution Happy to test out a PR :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5665/reactions", "total_count": 13, "+1": 13, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5665/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5664/comments
https://api.github.com/repos/huggingface/datasets/issues/5664/events
https://github.com/huggingface/datasets/pull/5664
1,637,192,684
PR_kwDODunzps5Mt6vp
5,664
Fix CI by temporarily pinning tensorflow < 2.12.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007500 / 0.011353 (-0.003853) | 0.005279 / 0.011008 (-0.005729) | 0.098848 / 0.038508 (0.060340) | 0.035290 / 0.023109 (0.012181) | 0.342676 / 0.275898 (0.066778) | 0.375310 / 0.323480 (0.051830) | 0.006037 / 0.007986 (-0.001948) | 0.004143 / 0.004328 (-0.000185) | 0.075757 / 0.004250 (0.071506) | 0.049436 / 0.037052 (0.012383) | 0.344734 / 0.258489 (0.086245) | 0.388111 / 0.293841 (0.094270) | 0.037079 / 0.128546 (-0.091467) | 0.011986 / 0.075646 (-0.063660) | 0.333911 / 0.419271 (-0.085361) | 0.050415 / 0.043533 (0.006882) | 0.341723 / 0.255139 (0.086584) | 0.364136 / 0.283200 (0.080936) | 0.099371 / 0.141683 (-0.042312) | 1.467030 / 1.452155 (0.014876) | 1.565472 / 1.492716 (0.072755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212534 / 0.018006 (0.194528) | 0.435854 / 0.000490 (0.435364) | 0.000419 / 0.000200 (0.000219) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027957 / 0.037411 (-0.009454) | 0.106835 / 0.014526 (0.092309) | 0.115733 / 0.176557 (-0.060824) | 0.172374 / 0.737135 (-0.564761) | 0.121907 / 0.296338 (-0.174431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413195 / 0.215209 (0.197986) | 4.144775 / 2.077655 (2.067120) | 
1.885647 / 1.504120 (0.381527) | 1.645525 / 1.541195 (0.104331) | 1.690117 / 1.468490 (0.221627) | 0.705787 / 4.584777 (-3.878989) | 3.763338 / 3.745712 (0.017626) | 2.163044 / 5.269862 (-3.106818) | 1.478619 / 4.565676 (-3.087057) | 0.086458 / 0.424275 (-0.337817) | 0.012711 / 0.007607 (0.005103) | 0.503592 / 0.226044 (0.277547) | 5.031176 / 2.268929 (2.762248) | 2.345348 / 55.444624 (-53.099276) | 2.064573 / 6.876477 (-4.811903) | 2.203937 / 2.142072 (0.061865) | 0.838761 / 4.805227 (-3.966466) | 0.170116 / 6.500664 (-6.330548) | 0.064012 / 0.075469 (-0.011457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190887 / 1.841788 (-0.650901) | 15.091466 / 8.074308 (7.017158) | 14.549112 / 10.191392 (4.357720) | 0.180603 / 0.680424 (-0.499820) | 0.017387 / 0.534201 (-0.516814) | 0.421372 / 0.579283 (-0.157911) | 0.434644 / 0.434364 (0.000281) | 0.496958 / 0.540337 (-0.043380) | 0.593995 / 1.386936 (-0.792941) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007790 / 0.011353 (-0.003563) | 0.005307 / 0.011008 (-0.005701) | 0.074779 / 0.038508 (0.036271) | 0.034442 / 0.023109 (0.011332) | 0.337973 / 0.275898 (0.062075) | 0.371944 / 0.323480 (0.048464) | 0.006088 / 0.007986 (-0.001897) | 0.005619 / 0.004328 (0.001291) | 0.073757 / 0.004250 (0.069507) | 0.049385 / 0.037052 (0.012333) | 0.338326 / 0.258489 (0.079837) | 0.387916 / 0.293841 (0.094075) | 0.037197 / 0.128546 (-0.091350) | 0.012371 / 0.075646 (-0.063275) | 0.086938 / 0.419271 (-0.332334) | 0.051379 / 0.043533 (0.007846) | 0.331580 / 0.255139 (0.076441) | 0.355765 / 0.283200 (0.072565) | 0.103368 / 0.141683 (-0.038315) | 1.475963 / 1.452155 (0.023808) | 1.530579 / 1.492716 (0.037863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223037 / 0.018006 (0.205031) | 0.441795 / 0.000490 (0.441305) | 0.003937 / 0.000200 (0.003737) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030081 / 0.037411 (-0.007330) | 0.110366 / 0.014526 (0.095841) | 0.124097 / 0.176557 (-0.052459) | 0.176237 / 0.737135 (-0.560898) | 0.127045 / 0.296338 (-0.169293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420191 / 0.215209 (0.204982) | 4.186721 / 2.077655 (2.109066) | 1.992336 / 1.504120 (0.488216) | 1.800567 / 1.541195 (0.259373) | 1.917982 / 1.468490 (0.449491) | 0.700932 / 4.584777 (-3.883845) | 3.888631 / 3.745712 (0.142918) | 2.138168 / 5.269862 (-3.131693) | 1.364636 / 4.565676 (-3.201041) | 0.085404 / 0.424275 (-0.338871) | 0.012550 / 0.007607 (0.004943) | 0.526110 / 0.226044 (0.300066) | 5.258717 / 2.268929 (2.989789) | 2.454287 / 55.444624 (-52.990338) | 2.130539 / 6.876477 (-4.745937) | 2.207982 / 2.142072 (0.065909) | 0.839242 / 4.805227 (-3.965985) | 0.167611 / 6.500664 (-6.333053) | 0.065706 / 0.075469 (-0.009763) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266125 / 1.841788 (-0.575662) | 15.480513 / 8.074308 (7.406205) | 14.959376 / 10.191392 (4.767983) | 0.149195 / 0.680424 (-0.531229) | 0.017881 / 0.534201 (-0.516320) | 0.430863 / 0.579283 (-0.148420) | 0.432878 / 0.434364 (-0.001485) | 0.499605 / 0.540337 (-0.040733) | 0.605592 / 1.386936 (-0.781344) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c20230f8d8762fb67523677093e95e773ce88786 \"CML watermark\")\n" ]
"2023-03-23T09:52:26"
"2023-03-23T10:17:11"
"2023-03-23T10:09:54"
MEMBER
null
As a hotfix for our CI, temporarily pin the `tensorflow` upper version: in Python 3.10, tensorflow-2.12.0 also installs `jax`. Fix #5663 until the root cause is fixed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5664/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5664", "html_url": "https://github.com/huggingface/datasets/pull/5664", "diff_url": "https://github.com/huggingface/datasets/pull/5664.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5664.patch", "merged_at": "2023-03-23T10:09:53" }
true
https://api.github.com/repos/huggingface/datasets/issues/5663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5663/comments
https://api.github.com/repos/huggingface/datasets/issues/5663/events
https://github.com/huggingface/datasets/issues/5663
1,637,173,248
I_kwDODunzps5hlUgA
5,663
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-03-23T09:39:43"
"2023-03-23T10:09:55"
"2023-03-23T10:09:55"
MEMBER
null
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662 ``` FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. ===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ====== ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5663/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5662/comments
https://api.github.com/repos/huggingface/datasets/issues/5662/events
https://github.com/huggingface/datasets/pull/5662
1,637,140,813
PR_kwDODunzps5MtvsM
5,662
Fix unnecessary dict comprehension
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I am merging because the CI error is unrelated.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009448 / 0.011353 (-0.001905) | 0.006156 / 0.011008 (-0.004852) | 0.123656 / 0.038508 (0.085147) | 0.034998 / 0.023109 (0.011889) | 0.374722 / 0.275898 (0.098824) | 0.418912 / 0.323480 (0.095432) | 0.007348 / 0.007986 (-0.000637) | 0.004779 / 0.004328 (0.000450) | 0.097541 / 0.004250 (0.093291) | 0.052523 / 0.037052 (0.015471) | 0.380118 / 0.258489 (0.121628) | 0.429448 / 0.293841 (0.135607) | 0.055156 / 0.128546 (-0.073390) | 0.019884 / 0.075646 (-0.055763) | 0.429613 / 0.419271 (0.010341) | 0.067554 / 0.043533 (0.024021) | 0.373940 / 0.255139 (0.118801) | 0.408115 / 0.283200 (0.124916) | 0.111353 / 0.141683 (-0.030329) | 1.821013 / 1.452155 (0.368858) | 1.972882 / 1.492716 (0.480165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236686 / 0.018006 (0.218679) | 0.516519 / 0.000490 (0.516029) | 0.009582 / 0.000200 (0.009383) | 0.000404 / 0.000054 (0.000349) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029425 / 0.037411 (-0.007986) | 0.123972 / 0.014526 (0.109446) | 0.133768 / 0.176557 (-0.042789) | 0.207562 / 0.737135 (-0.529573) | 0.142841 / 0.296338 (-0.153497) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618531 / 0.215209 
(0.403322) | 6.216854 / 2.077655 (4.139199) | 2.480138 / 1.504120 (0.976018) | 2.139884 / 1.541195 (0.598689) | 2.122992 / 1.468490 (0.654502) | 1.233824 / 4.584777 (-3.350953) | 5.426142 / 3.745712 (1.680430) | 4.891039 / 5.269862 (-0.378822) | 2.767033 / 4.565676 (-1.798643) | 0.142224 / 0.424275 (-0.282051) | 0.015754 / 0.007607 (0.008147) | 0.772210 / 0.226044 (0.546166) | 7.620484 / 2.268929 (5.351556) | 3.141617 / 55.444624 (-52.303007) | 2.471406 / 6.876477 (-4.405070) | 2.648008 / 2.142072 (0.505935) | 1.429281 / 4.805227 (-3.375946) | 0.255981 / 6.500664 (-6.244683) | 0.077710 / 0.075469 (0.002241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547714 / 1.841788 (-0.294073) | 17.859985 / 8.074308 (9.785677) | 21.791878 / 10.191392 (11.600486) | 0.238569 / 0.680424 (-0.441854) | 0.027520 / 0.534201 (-0.506681) | 0.553960 / 0.579283 (-0.025324) | 0.616165 / 0.434364 (0.181801) | 0.622492 / 0.540337 (0.082154) | 0.716345 / 1.386936 (-0.670591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009624 / 0.011353 (-0.001729) | 0.006091 / 0.011008 (-0.004917) | 0.096623 / 0.038508 (0.058115) | 0.034903 / 0.023109 (0.011793) | 0.421009 / 0.275898 (0.145111) | 0.459236 / 0.323480 (0.135756) | 0.007778 / 0.007986 (-0.000207) | 0.004726 / 0.004328 (0.000398) | 0.099603 / 0.004250 (0.095353) | 0.051426 / 0.037052 (0.014373) | 0.420461 / 0.258489 (0.161972) | 0.469747 / 0.293841 (0.175906) | 0.053769 / 0.128546 (-0.074777) | 0.020636 / 0.075646 (-0.055011) | 0.115785 / 0.419271 (-0.303486) | 0.062692 / 0.043533 (0.019160) | 0.419388 / 0.255139 (0.164249) | 0.448675 / 0.283200 (0.165475) | 0.112099 / 0.141683 (-0.029584) | 1.787982 / 1.452155 (0.335827) | 1.884581 / 1.492716 (0.391864) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208837 / 0.018006 (0.190831) | 0.515593 / 0.000490 (0.515103) | 0.000447 / 0.000200 (0.000247) | 0.000086 / 0.000054 
(0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031025 / 0.037411 (-0.006386) | 0.125179 / 0.014526 (0.110653) | 0.137050 / 0.176557 (-0.039506) | 0.203582 / 0.737135 (-0.533553) | 0.139209 / 0.296338 (-0.157130) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.601507 / 0.215209 (0.386298) | 6.034778 / 2.077655 (3.957123) | 2.550277 / 1.504120 (1.046157) | 2.242277 / 1.541195 (0.701082) | 2.306378 / 1.468490 (0.837888) | 1.251219 / 4.584777 (-3.333558) | 5.448698 / 3.745712 (1.702986) | 3.044666 / 5.269862 (-2.225196) | 2.000684 / 4.565676 (-2.564992) | 0.148385 / 0.424275 (-0.275890) | 0.015175 / 0.007607 (0.007567) | 0.800839 / 0.226044 (0.574795) | 8.062099 / 2.268929 (5.793171) | 3.400980 / 55.444624 (-52.043644) | 2.639583 / 6.876477 (-4.236894) | 2.660691 / 2.142072 (0.518618) | 1.467715 / 4.805227 (-3.337512) | 0.266429 / 6.500664 (-6.234235) | 0.076981 / 0.075469 (0.001512) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621128 / 1.841788 (-0.220659) | 17.949989 / 8.074308 (9.875680) | 20.946426 / 10.191392 (10.755034) | 0.259357 / 0.680424 (-0.421067) | 0.026094 / 0.534201 (-0.508107) | 0.527840 / 0.579283 (-0.051443) | 0.629027 / 0.434364 (0.194663) | 0.603931 / 0.540337 (0.063594) | 0.711370 / 1.386936 (-0.675566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ccf01db81bb7b70f3ea97b185e345c2b1df0274 \"CML watermark\")\n" ]
"2023-03-23T09:18:58"
"2023-03-23T09:46:59"
"2023-03-23T09:37:49"
MEMBER
null
After ruff-0.0.258 release, the C416 rule was updated with unnecessary dict comprehensions. See: - https://github.com/charliermarsh/ruff/releases/tag/v0.0.258 - https://github.com/charliermarsh/ruff/pull/3605 This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple values. Fix #5661
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5662/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5662", "html_url": "https://github.com/huggingface/datasets/pull/5662", "diff_url": "https://github.com/huggingface/datasets/pull/5662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5662.patch", "merged_at": "2023-03-23T09:37:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/5661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5661/comments
https://api.github.com/repos/huggingface/datasets/issues/5661/events
https://github.com/huggingface/datasets/issues/5661
1,637,129,445
I_kwDODunzps5hlJzl
5,661
CI is broken: Unnecessary `dict` comprehension
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-03-23T09:13:01"
"2023-03-23T09:37:51"
"2023-03-23T09:37:51"
MEMBER
null
CI check_code_quality is broken: ``` src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`) Found 1 error. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5661/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5660/comments
https://api.github.com/repos/huggingface/datasets/issues/5660/events
https://github.com/huggingface/datasets/issues/5660
1,635,543,646
I_kwDODunzps5hfGpe
5,660
integration with imbalanced-learn
{ "login": "tansaku", "id": 30216, "node_id": "MDQ6VXNlcjMwMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/30216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tansaku", "html_url": "https://github.com/tansaku", "followers_url": "https://api.github.com/users/tansaku/followers", "following_url": "https://api.github.com/users/tansaku/following{/other_user}", "gists_url": "https://api.github.com/users/tansaku/gists{/gist_id}", "starred_url": "https://api.github.com/users/tansaku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tansaku/subscriptions", "organizations_url": "https://api.github.com/users/tansaku/orgs", "repos_url": "https://api.github.com/users/tansaku/repos", "events_url": "https://api.github.com/users/tansaku/events{/privacy}", "received_events_url": "https://api.github.com/users/tansaku/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
[ "You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), you just need to pass the list of rows ids and labels:\r\n\r\n```python\r\nrow_indices = list(range(len(dataset)))\r\nresampled_row_indices, _ = make_imbalance(\r\n row_indices,\r\n dataset[\"label\"],\r\n sampling_strategy={0: 25, 1: 50, 2: 50},\r\n random_state=RANDOM_STATE,\r\n)\r\n\r\nresampled_dataset = dataset.select(resampled_row_indices)\r\n```" ]
"2023-03-22T11:05:17"
"2023-07-06T18:10:15"
"2023-07-06T18:10:15"
NONE
null
### Feature request Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets? ### Motivation I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I've looked online, asked gpt-4, but so far not making much progress. ### Your contribution If I can get this working myself I can submit a PR with example code to go in the docs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5660/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5659/comments
https://api.github.com/repos/huggingface/datasets/issues/5659/events
https://github.com/huggingface/datasets/issues/5659
1,635,447,540
I_kwDODunzps5hevL0
5,659
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc @polinaeterna @lhoestq ", "@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. \r\nThe only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n\r\n```bash\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\ncd libsndfile/\r\nautoreconf -vif\r\n./configure --enable-werror \r\nmake\r\nmake install\r\n```\r\nfor this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n```bash\r\napt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\nbut for other Linux distributions it might be different.\r\n\r\nWhen the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n\r\n```bash\r\ncp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\ncp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n```\r\n\r\nAnother solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`. ", "Not sure if it may help, but you could also try updating `pip` before installing soundfile", "@lhoestq @sanchit-gandhi. I encountered the same error (also on the TPU v4) when trying to run `datasets` from source.\r\n\r\nDowngrading soundfile with `pip install soundfile==0.12.0` seems to fix the issue for me.", "Maybe let's open an issue at https://github.com/bastibe/python-soundfile/issues in case they might know why you get `OSError: cannot load library 'libsndfile.so'` ?", "> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n\r\nThis fixed the issue for me. After installing libsndfile as described above, I had to uninstall soundfile and re-install it with this command. `pip install \"soundfile>=0.12.1\"`", "Thank you so much for the comprehensive instructions @polinaeterna! Also confirming that they worked for me 🤗 In my case, I had to run several of these commands under \"sudo\" for privileges, but otherwise this workaround gave a successful `libsndfile` install:\r\n\r\n1. Grab source code:\r\n```\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\n```\r\n\r\n2. 
Set up a build environment:\r\n```\r\nsudo apt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\n\r\n3. Build and test `libsndfile`:\r\n\r\n```\r\nautoreconf -vif\r\n./configure --enable-werror\r\nsudo make\r\nsudo make check\r\n```\r\n\r\n4. Create `_soundfile_data` submodule (if it does not exist already):\r\n```\r\nsudo mkdir /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```\r\n\r\n5. Copy `libsndfile` files into submodule:\r\n```\r\nsudo cp /usr/local/lib/libsndfile.* /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```", "On a different machine, I also tried separately by first upgrading pip, then installing soundfile. This worked too! Thanks @lhoestq 🙌", "> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n> \r\n> for this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n> \r\n> ```shell\r\n> apt install autoconf autogen automake build-essential libasound2-dev \\\r\n> libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n> libmpg123-dev pkg-config python\r\n> ```\r\n> \r\n> but for other Linux distributions it might be different.\r\n> \r\n> When the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n> \r\n> ```shell\r\n> cp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> cp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> ```\r\n> \r\n> Another solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`.\r\n\r\nThanks, the solution solved my problem. \r\n\r\n1. Purge uninstall libsndfile, uninstall python-soundfile.\r\n2. Build libsndfile from source code and install.\r\n3. Build python-soundfile from source code and install\r\n4. Well done.", "> Thank you so much for the comprehensive instructions @polinaeterna! Also confirming that they worked for me 🤗 In my case, I had to run several of these commands under \"sudo\" for privileges, but otherwise this workaround gave a successful `libsndfile` install:\r\n> \r\n> 1. Grab source code:\r\n> \r\n> ```\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> ```\r\n> \r\n> 2. Set up a build environment:\r\n> \r\n> ```\r\n> sudo apt install autoconf autogen automake build-essential libasound2-dev \\\r\n> libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n> libmpg123-dev pkg-config python\r\n> ```\r\n> \r\n> 3. 
Build and test `libsndfile`:\r\n> \r\n> ```\r\n> autoreconf -vif\r\n> ./configure --enable-werror\r\n> sudo make\r\n> sudo make check\r\n> ```\r\n> \r\n> 4. Create `_soundfile_data` submodule (if it does not exist already):\r\n> \r\n> ```\r\n> sudo mkdir /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n> ```\r\n> \r\n> 5. Copy `libsndfile` files into submodule:\r\n> \r\n> ```\r\n> sudo cp /usr/local/lib/libsndfile.* /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n> ```\r\n\r\nI had to run 'make install' or the `/usr/local/lib/libsndfile.*` files didn't exist.\r\n\r\nIt's working though!", "I had the same issue but it is working now! Thanks for all of your comments!", "I had the same issue on SageMaker but not on Colab;\r\nThe `soundfile` versioning was fine.\r\n\r\n my approach to solve it was to match {\"numpy\", \"numba\"} exact versions\r\n\r\n```\r\n! pip install \"numpy==1.23.5\"\r\n! pip install \"numpy==0.58.1\"\r\n\r\n```\r\nthe numbers are from Colab where successfully I could do the job.\r\n\r\n" ]
"2023-03-22T10:07:33"
"2024-01-17T13:59:22"
"2023-04-07T08:51:28"
CONTRIBUTOR
null
### Describe the bug I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4. The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type. The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71 However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing: ``` pip install soundfile==0.12.1 ``` Then: ```python >>> soundfile >>> soundfile.__libsndfile_version__ ``` <details> <summary> Traceback (most recent call last): </summary> ``` File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module> import _soundfile_data # ImportError if this doesn't exist ModuleNotFoundError: No module named '_soundfile_data' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module> raise OSError('sndfile library not found using ctypes.util.find_library') OSError: sndfile library not found using ctypes.util.find_library During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module> _snd = _ffi.dlopen(_explicit_libname) OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory ``` </details> Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as: ``` pip install --upgrade soundfile sudo apt install libsndfile1 ``` We can now import `soundfile`: ```python >>> import soundfile >>> soundfile.__version__ '0.12.1' >>> soundfile.__libsndfile_version__ '1.0.28' ``` We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147 But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138 Updating/upgrading the `libsndfile` doesn't change this: ``` sudo apt-get update sudo apt-get upgrade ``` Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files. Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues. ### Steps to reproduce the bug Environment described above. 
Loading mp3 files: ```python from datasets import load_dataset common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True) print(next(iter(common_voice_es))) ``` ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[4], line 2 1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True) ----> 2 print(next(iter(common_voice_es))) File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self) 937 for key, example in ex_iterable: 938 if self.features: 939 # `IterableDataset` automatically fills missing columns with None. 940 # This is done with `_apply_feature_types_on_example`. --> 941 yield _apply_feature_types_on_example( 942 example, self.features, token_per_repo_id=self._token_per_repo_id 943 ) 944 else: 945 yield example File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id) 698 encoded_example = features.encode_example(example) 699 # Decode example for Audio feature, e.g. --> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 701 return decoded_example File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id) 1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1851 """Decode example with custom feature decoding. 1852 1853 Args: (...) 1861 `dict[str, Any]` 1862 """ -> 1864 return { 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1866 if self._column_requires_decoding[column_name] 1867 else value 1868 for column_name, (feature, value) in zip_dict( 1869 {key: value for key, value in self.items() if key in example}, example 1870 ) 1871 } File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0) 1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1851 """Decode example with custom feature decoding. 1852 1853 Args: (...) 1861 `dict[str, Any]` 1862 """ 1864 return { -> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1866 if self._column_requires_decoding[column_name] 1867 else value 1868 for column_name, (feature, value) in zip_dict( 1869 {key: value for key, value in self.items() if key in example}, example 1870 ) 1871 } File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id) 1305 elif isinstance(schema, (Audio, Image)): 1306 # we pass the token to read and decode files from private repositories in streaming mode 1307 if obj is not None and schema.decode: -> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1309 return obj File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id) 162 raise RuntimeError( 163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, " 164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. ' 165 ) 166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3": --> 167 raise RuntimeError( 168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, " 169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. 
' 170 ) 172 if file is None: 173 token_per_repo_id = token_per_repo_id or {} RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. ``` ### Expected behavior Load mp3 files! ### Environment info - `datasets` version: 2.10.2.dev0 - Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Soundfile version: 0.12.1 - Libsndfile version: 1.0.28
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5659/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5658/comments
https://api.github.com/repos/huggingface/datasets/issues/5658/events
https://github.com/huggingface/datasets/pull/5658
1,634,867,204
PR_kwDODunzps5MmJe0
5,658
docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007351 / 0.011353 (-0.004002) | 0.005025 / 0.011008 (-0.005983) | 0.095978 / 0.038508 (0.057470) | 0.033486 / 0.023109 (0.010377) | 0.294427 / 0.275898 (0.018529) | 0.325157 / 0.323480 (0.001677) | 0.005671 / 0.007986 (-0.002315) | 0.005284 / 0.004328 (0.000955) | 0.073159 / 0.004250 (0.068909) | 0.045162 / 0.037052 (0.008110) | 0.294004 / 0.258489 (0.035515) | 0.343545 / 0.293841 (0.049704) | 0.036857 / 0.128546 (-0.091689) | 0.012245 / 0.075646 (-0.063401) | 0.332258 / 0.419271 (-0.087014) | 0.051909 / 0.043533 (0.008377) | 0.295701 / 0.255139 (0.040562) | 0.315247 / 0.283200 (0.032048) | 0.102363 / 0.141683 (-0.039320) | 1.441944 / 1.452155 (-0.010211) | 1.527161 / 1.492716 (0.034445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211769 / 0.018006 (0.193763) | 0.452015 / 0.000490 (0.451525) | 0.004041 / 0.000200 (0.003841) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027396 / 0.037411 (-0.010015) | 0.108318 / 0.014526 (0.093793) | 0.116851 / 0.176557 (-0.059706) | 0.172658 / 0.737135 (-0.564478) | 0.122876 / 0.296338 (-0.173462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406484 / 0.215209 (0.191275) | 4.053849 / 2.077655 (1.976194) | 
1.842947 / 1.504120 (0.338827) | 1.649473 / 1.541195 (0.108278) | 1.728629 / 1.468490 (0.260139) | 0.699519 / 4.584777 (-3.885258) | 3.730823 / 3.745712 (-0.014889) | 2.139624 / 5.269862 (-3.130237) | 1.487839 / 4.565676 (-3.077837) | 0.086699 / 0.424275 (-0.337576) | 0.012815 / 0.007607 (0.005208) | 0.514014 / 0.226044 (0.287969) | 5.153315 / 2.268929 (2.884387) | 2.324431 / 55.444624 (-53.120193) | 1.971533 / 6.876477 (-4.904944) | 2.074480 / 2.142072 (-0.067592) | 0.842419 / 4.805227 (-3.962808) | 0.169140 / 6.500664 (-6.331524) | 0.065206 / 0.075469 (-0.010263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180887 / 1.841788 (-0.660901) | 14.627401 / 8.074308 (6.553093) | 14.382699 / 10.191392 (4.191307) | 0.143986 / 0.680424 (-0.536438) | 0.017460 / 0.534201 (-0.516741) | 0.422100 / 0.579283 (-0.157183) | 0.417474 / 0.434364 (-0.016890) | 0.493712 / 0.540337 (-0.046625) | 0.589744 / 1.386936 (-0.797193) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007538 / 0.011353 (-0.003815) | 0.005122 / 0.011008 (-0.005887) | 0.073858 / 0.038508 (0.035350) | 0.034561 / 0.023109 (0.011451) | 0.341250 / 0.275898 (0.065352) | 0.373063 / 0.323480 (0.049583) | 0.005785 / 0.007986 (-0.002200) | 0.005393 / 0.004328 (0.001065) | 0.072354 / 0.004250 (0.068104) | 0.047005 / 0.037052 (0.009953) | 0.341179 / 0.258489 (0.082690) | 0.386299 / 0.293841 (0.092458) | 0.038315 / 0.128546 (-0.090231) | 0.012200 / 0.075646 (-0.063446) | 0.086132 / 0.419271 (-0.333140) | 0.049873 / 0.043533 (0.006340) | 0.337985 / 0.255139 (0.082846) | 0.354806 / 0.283200 (0.071607) | 0.103557 / 0.141683 (-0.038126) | 1.445682 / 1.452155 (-0.006473) | 1.551008 / 1.492716 (0.058291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235873 / 0.018006 (0.217867) | 0.448445 / 0.000490 (0.447955) | 0.001307 / 0.000200 (0.001108) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029809 / 0.037411 (-0.007603) | 0.108833 / 0.014526 (0.094307) | 0.123289 / 0.176557 (-0.053268) | 0.176516 / 0.737135 (-0.560620) | 0.127186 / 0.296338 (-0.169153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422037 / 0.215209 (0.206828) | 4.188073 / 2.077655 (2.110418) | 1.999295 / 1.504120 (0.495175) | 1.809229 / 1.541195 (0.268034) | 1.930798 / 1.468490 (0.462308) | 0.694371 / 4.584777 (-3.890406) | 3.833432 / 3.745712 (0.087719) | 3.235600 / 5.269862 (-2.034262) | 1.867822 / 4.565676 (-2.697854) | 0.085734 / 0.424275 (-0.338541) | 0.012727 / 0.007607 (0.005120) | 0.542261 / 0.226044 (0.316217) | 5.289366 / 2.268929 (3.020437) | 2.469636 / 55.444624 (-52.974988) | 2.139392 / 6.876477 (-4.737084) | 2.193305 / 2.142072 (0.051233) | 0.846747 / 4.805227 (-3.958481) | 0.168965 / 6.500664 (-6.331699) | 0.064463 / 0.075469 (-0.011006) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263818 / 1.841788 (-0.577970) | 15.254642 / 8.074308 (7.180334) | 14.428111 / 10.191392 (4.236719) | 0.164770 / 0.680424 (-0.515654) | 0.017476 / 0.534201 (-0.516725) | 0.420198 / 0.579283 (-0.159085) | 0.443250 / 0.434364 (0.008886) | 0.496904 / 0.540337 (-0.043434) | 0.596541 / 1.386936 (-0.790395) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4db8e33eb9cf6cd4453cdfa246c065e0eedf170c \"CML watermark\")\n" ]
"2023-03-22T00:12:18"
"2023-03-24T16:43:34"
"2023-03-24T16:36:21"
CONTRIBUTOR
null
Closes #5653 @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5658/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5658", "html_url": "https://github.com/huggingface/datasets/pull/5658", "diff_url": "https://github.com/huggingface/datasets/pull/5658.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5658.patch", "merged_at": "2023-03-24T16:36:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/5656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5656/comments
https://api.github.com/repos/huggingface/datasets/issues/5656/events
https://github.com/huggingface/datasets/pull/5656
1,634,156,563
PR_kwDODunzps5Mjxoo
5,656
Fix `fsspec.open` when using an HTTP proxy
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.005351 / 0.011008 (-0.005657) | 0.096325 / 0.038508 (0.057817) | 0.034204 / 0.023109 (0.011095) | 0.328080 / 0.275898 (0.052182) | 0.361519 / 0.323480 (0.038039) | 0.005954 / 0.007986 (-0.002032) | 0.004106 / 0.004328 (-0.000222) | 0.072827 / 0.004250 (0.068576) | 0.050522 / 0.037052 (0.013470) | 0.326975 / 0.258489 (0.068486) | 0.373180 / 0.293841 (0.079339) | 0.037024 / 0.128546 (-0.091522) | 0.012347 / 0.075646 (-0.063299) | 0.332341 / 0.419271 (-0.086931) | 0.050695 / 0.043533 (0.007162) | 0.328298 / 0.255139 (0.073159) | 0.352808 / 0.283200 (0.069608) | 0.101637 / 0.141683 (-0.040046) | 1.435172 / 1.452155 (-0.016982) | 1.529797 / 1.492716 (0.037080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305727 / 0.018006 (0.287721) | 0.583951 / 0.000490 (0.583462) | 0.011699 / 0.000200 (0.011499) | 0.000345 / 0.000054 (0.000290) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027917 / 0.037411 (-0.009495) | 0.107698 / 0.014526 (0.093173) | 0.120572 / 0.176557 (-0.055985) | 0.176066 / 0.737135 (-0.561069) | 0.125348 / 0.296338 (-0.170991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411980 / 0.215209 (0.196771) | 4.113135 / 2.077655 (2.035480) | 
1.868725 / 1.504120 (0.364605) | 1.677422 / 1.541195 (0.136227) | 1.796759 / 1.468490 (0.328269) | 0.701957 / 4.584777 (-3.882820) | 3.830742 / 3.745712 (0.085030) | 2.170444 / 5.269862 (-3.099418) | 1.345097 / 4.565676 (-3.220580) | 0.086661 / 0.424275 (-0.337614) | 0.013073 / 0.007607 (0.005466) | 0.519150 / 0.226044 (0.293106) | 5.193447 / 2.268929 (2.924518) | 2.391155 / 55.444624 (-53.053470) | 2.076610 / 6.876477 (-4.799867) | 2.245557 / 2.142072 (0.103484) | 0.846496 / 4.805227 (-3.958731) | 0.169246 / 6.500664 (-6.331418) | 0.066360 / 0.075469 (-0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196344 / 1.841788 (-0.645444) | 15.640363 / 8.074308 (7.566055) | 14.936144 / 10.191392 (4.744752) | 0.163613 / 0.680424 (-0.516811) | 0.017900 / 0.534201 (-0.516301) | 0.425377 / 0.579283 (-0.153906) | 0.431119 / 0.434364 (-0.003245) | 0.513669 / 0.540337 (-0.026669) | 0.592970 / 1.386936 (-0.793966) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007958 / 0.011353 (-0.003395) | 0.005707 / 0.011008 (-0.005301) | 0.075377 / 0.038508 (0.036869) | 0.037126 / 0.023109 (0.014016) | 0.344589 / 0.275898 (0.068691) | 0.381060 / 0.323480 (0.057580) | 0.006592 / 0.007986 (-0.001393) | 0.004479 / 0.004328 (0.000151) | 0.074456 / 0.004250 (0.070206) | 0.054087 / 0.037052 (0.017035) | 0.344942 / 0.258489 (0.086453) | 0.393174 / 0.293841 (0.099333) | 0.037926 / 0.128546 (-0.090620) | 0.012638 / 0.075646 (-0.063009) | 0.087743 / 0.419271 (-0.331529) | 0.050081 / 0.043533 (0.006548) | 0.340406 / 0.255139 (0.085267) | 0.361487 / 0.283200 (0.078287) | 0.108546 / 0.141683 (-0.033137) | 1.424626 / 1.452155 (-0.027529) | 1.553958 / 1.492716 (0.061242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329922 / 0.018006 (0.311916) | 0.523239 / 0.000490 (0.522749) | 0.012164 / 0.000200 (0.011964) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031935 / 0.037411 (-0.005477) | 0.115680 / 0.014526 (0.101154) | 0.130062 / 0.176557 (-0.046494) | 0.180679 / 0.737135 (-0.556457) | 0.135548 / 0.296338 (-0.160790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429648 / 0.215209 (0.214439) | 4.303342 / 2.077655 (2.225687) | 1.999395 / 1.504120 (0.495275) | 1.810354 / 1.541195 (0.269160) | 1.963132 / 1.468490 (0.494642) | 0.701654 / 4.584777 (-3.883122) | 3.844687 / 3.745712 (0.098975) | 2.153425 / 5.269862 (-3.116436) | 1.351541 / 4.565676 (-3.214135) | 0.086292 / 0.424275 (-0.337983) | 0.012491 / 0.007607 (0.004883) | 0.523144 / 0.226044 (0.297099) | 5.243283 / 2.268929 (2.974355) | 2.465849 / 55.444624 (-52.978775) | 2.154505 / 6.876477 (-4.721972) | 2.245500 / 2.142072 (0.103428) | 0.838902 / 4.805227 (-3.966326) | 0.169441 / 6.500664 (-6.331223) | 0.065631 / 0.075469 (-0.009838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262175 / 1.841788 (-0.579612) | 15.424650 / 8.074308 (7.350342) | 15.000718 / 10.191392 (4.809326) | 0.186328 / 0.680424 (-0.494096) | 0.018076 / 0.534201 (-0.516125) | 0.433458 / 0.579283 (-0.145825) | 0.424213 / 0.434364 (-0.010151) | 0.546568 / 0.540337 (0.006231) | 0.643529 / 1.386936 (-0.743407) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea7298bf121d7ae8079f0a59deb67c2fa1d4df6a \"CML watermark\")\n" ]
"2023-03-21T15:23:29"
"2023-03-23T14:14:50"
"2023-03-23T13:15:46"
CONTRIBUTOR
null
Most HTTP(S) downloads from this library support proxies automatically by reading the `HTTP_PROXY` environment variable (and related variables), because `requests` is widely used. However, some parts of the code rely on `fsspec`, which uses `aiohttp` for HTTP(S) requests instead of `requests`, and `aiohttp` does not read proxy env variables by default. This PR enables reading them automatically. See the [aiohttp docs on using proxies](https://docs.aiohttp.org/en/stable/client_advanced.html?highlight=trust_env#proxy-support). For context, [the Python library requests](https://requests.readthedocs.io/en/latest/user/advanced/?highlight=http_proxy#proxies) and [the standard library's `urllib.urlopen`](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen) support this automatically by default. Many common programs do the same, including cURL, APT, and Wget.
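As a rough illustration (not code from the PR), `aiohttp` only honors `HTTP_PROXY`/`HTTPS_PROXY`/`NO_PROXY` when a session is created with `trust_env=True`; a minimal standalone sketch:

```python
import asyncio
import aiohttp

async def fetch_status(url: str) -> int:
    # trust_env=True tells aiohttp to read HTTP_PROXY / HTTPS_PROXY / NO_PROXY
    # from the environment, matching the default behavior of `requests`.
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(url) as resp:
            return resp.status

# e.g. run with: HTTPS_PROXY=http://proxy.example:3128 python fetch_status.py
print(asyncio.run(fetch_status("https://huggingface.co")))
```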
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5656/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5656", "html_url": "https://github.com/huggingface/datasets/pull/5656", "diff_url": "https://github.com/huggingface/datasets/pull/5656.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5656.patch", "merged_at": "2023-03-23T13:15:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/5655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5655/comments
https://api.github.com/repos/huggingface/datasets/issues/5655/events
https://github.com/huggingface/datasets/pull/5655
1,634,030,017
PR_kwDODunzps5MjWYy
5,655
Improve features decoding in to_iterable_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009691 / 0.011353 (-0.001662) | 0.006160 / 0.011008 (-0.004848) | 0.127528 / 0.038508 (0.089020) | 0.034445 / 0.023109 (0.011335) | 0.391483 / 0.275898 (0.115585) | 0.425922 / 0.323480 (0.102442) | 0.006621 / 0.007986 (-0.001365) | 0.004550 / 0.004328 (0.000221) | 0.099134 / 0.004250 (0.094884) | 0.051089 / 0.037052 (0.014037) | 0.398675 / 0.258489 (0.140186) | 0.456740 / 0.293841 (0.162899) | 0.052279 / 0.128546 (-0.076267) | 0.020878 / 0.075646 (-0.054768) | 0.414954 / 0.419271 (-0.004317) | 0.061903 / 0.043533 (0.018370) | 0.393088 / 0.255139 (0.137949) | 0.410289 / 0.283200 (0.127089) | 0.101684 / 0.141683 (-0.039998) | 1.747102 / 1.452155 (0.294947) | 1.896976 / 1.492716 (0.404260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203193 / 0.018006 (0.185187) | 0.495011 / 0.000490 (0.494521) | 0.006290 / 0.000200 (0.006090) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034840 / 0.037411 (-0.002571) | 0.122529 / 0.014526 (0.108003) | 0.133870 / 0.176557 (-0.042686) | 0.207771 / 0.737135 (-0.529364) | 0.141441 / 0.296338 (-0.154897) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604190 / 0.215209 (0.388981) | 6.040295 / 2.077655 (3.962641) | 2.405703 
/ 1.504120 (0.901583) | 2.062767 / 1.541195 (0.521572) | 2.079313 / 1.468490 (0.610823) | 1.240107 / 4.584777 (-3.344670) | 5.316583 / 3.745712 (1.570871) | 3.104758 / 5.269862 (-2.165103) | 2.056489 / 4.565676 (-2.509187) | 0.149060 / 0.424275 (-0.275215) | 0.014467 / 0.007607 (0.006860) | 0.736882 / 0.226044 (0.510838) | 7.324142 / 2.268929 (5.055213) | 3.048752 / 55.444624 (-52.395872) | 2.385013 / 6.876477 (-4.491463) | 2.457478 / 2.142072 (0.315405) | 1.459276 / 4.805227 (-3.345951) | 0.253882 / 6.500664 (-6.246782) | 0.076756 / 0.075469 (0.001287) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499166 / 1.841788 (-0.342622) | 17.294165 / 8.074308 (9.219857) | 20.385668 / 10.191392 (10.194276) | 0.254633 / 0.680424 (-0.425791) | 0.026253 / 0.534201 (-0.507948) | 0.532928 / 0.579283 (-0.046355) | 0.606095 / 0.434364 (0.171731) | 0.615025 / 0.540337 (0.074687) | 0.728651 / 1.386936 (-0.658285) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009376 / 0.011353 (-0.001977) | 0.005981 / 0.011008 (-0.005027) | 0.109898 / 0.038508 (0.071390) | 0.033746 / 0.023109 (0.010637) | 0.410226 / 0.275898 (0.134328) | 0.470606 / 0.323480 (0.147126) | 0.006706 / 0.007986 (-0.001279) | 0.004482 / 0.004328 (0.000153) | 0.092280 / 0.004250 (0.088030) | 0.047988 / 0.037052 (0.010935) | 0.430628 / 0.258489 (0.172139) | 0.480668 / 0.293841 (0.186827) | 0.052099 / 0.128546 (-0.076447) | 0.018743 / 0.075646 (-0.056903) | 0.112204 / 0.419271 (-0.307068) | 0.059838 / 0.043533 (0.016305) | 0.418230 / 0.255139 (0.163091) | 0.451568 / 0.283200 (0.168368) | 0.107026 / 0.141683 (-0.034657) | 1.708111 / 1.452155 (0.255956) | 1.839268 / 1.492716 (0.346552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229558 / 0.018006 (0.211552) | 0.488099 / 0.000490 (0.487609) | 0.004643 / 0.000200 (0.004443) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030461 / 0.037411 (-0.006951) | 0.120993 / 0.014526 (0.106467) | 0.130874 / 0.176557 (-0.045682) | 0.193550 / 0.737135 (-0.543585) | 0.138164 / 0.296338 (-0.158174) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635709 / 0.215209 (0.420500) | 6.225112 / 2.077655 (4.147457) | 2.639584 / 1.504120 (1.135465) | 2.254487 / 1.541195 (0.713293) | 2.280478 / 1.468490 (0.811988) | 1.205712 / 4.584777 (-3.379065) | 5.367845 / 3.745712 (1.622133) | 3.020207 / 5.269862 (-2.249655) | 2.001897 / 4.565676 (-2.563779) | 0.149582 / 0.424275 (-0.274693) | 0.014867 / 0.007607 (0.007260) | 0.759050 / 0.226044 (0.533006) | 7.692969 / 2.268929 (5.424041) | 3.274009 / 55.444624 (-52.170615) | 2.635529 / 6.876477 (-4.240948) | 2.672960 / 2.142072 (0.530888) | 1.426487 / 4.805227 (-3.378740) | 0.253368 / 6.500664 (-6.247296) | 0.078650 / 0.075469 (0.003181) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620265 / 1.841788 (-0.221523) | 17.674168 / 8.074308 (9.599860) | 21.120528 / 10.191392 (10.929136) | 0.244205 / 0.680424 (-0.436218) | 0.029646 / 0.534201 (-0.504555) | 0.510948 / 0.579283 (-0.068335) | 0.586255 / 0.434364 (0.151891) | 0.589286 / 0.540337 (0.048949) | 0.736561 / 1.386936 (-0.650375) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de5fe9ae5df84c489e08dcbdc3d2d20272b312c3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007778 / 0.011353 (-0.003575) | 0.005432 / 0.011008 (-0.005577) | 0.098776 / 0.038508 (0.060268) | 0.035196 / 0.023109 (0.012087) | 0.305646 / 0.275898 (0.029748) | 0.342661 / 0.323480 (0.019181) | 0.006513 / 0.007986 (-0.001472) | 0.005897 / 0.004328 (0.001568) | 0.075797 / 0.004250 (0.071547) | 0.056060 / 0.037052 (0.019007) | 0.306645 / 0.258489 (0.048156) | 0.352447 / 0.293841 (0.058606) | 0.037304 / 0.128546 (-0.091242) | 0.012514 / 0.075646 (-0.063132) | 0.334949 / 0.419271 (-0.084323) | 0.051600 / 0.043533 (0.008067) | 0.302302 / 0.255139 (0.047163) | 0.322238 / 0.283200 (0.039038) | 0.106896 / 0.141683 (-0.034787) | 1.483163 / 1.452155 (0.031008) | 1.587483 / 1.492716 (0.094767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292318 / 0.018006 (0.274312) | 0.541541 / 0.000490 (0.541051) | 0.008342 / 0.000200 (0.008142) | 0.000339 / 0.000054 (0.000285) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028287 / 0.037411 (-0.009124) | 0.107775 / 0.014526 (0.093250) | 0.119112 / 0.176557 (-0.057445) | 0.174002 / 0.737135 (-0.563134) | 0.126531 / 0.296338 (-0.169808) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401684 / 0.215209 (0.186475) | 4.024708 / 2.077655 (1.947053) | 1.812763 / 1.504120 (0.308643) | 1.629540 / 1.541195 (0.088345) | 1.731733 / 1.468490 (0.263243) | 0.711066 / 4.584777 (-3.873711) | 3.867499 / 3.745712 (0.121786) | 3.615968 / 5.269862 (-1.653893) | 1.876077 / 4.565676 (-2.689600) | 0.087003 / 0.424275 (-0.337272) | 0.012445 / 0.007607 (0.004838) | 0.499106 / 0.226044 (0.273061) | 4.975920 / 2.268929 (2.706992) | 2.279074 / 55.444624 (-53.165550) | 1.952311 / 6.876477 (-4.924166) | 2.167480 / 2.142072 (0.025408) | 0.855882 / 4.805227 (-3.949346) | 0.171378 / 6.500664 (-6.329287) | 0.066731 / 0.075469 (-0.008738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184226 / 1.841788 (-0.657561) | 15.383396 / 8.074308 (7.309088) | 15.069783 / 10.191392 (4.878391) | 0.161489 / 0.680424 (-0.518935) | 0.017763 / 0.534201 (-0.516438) | 0.427103 / 0.579283 (-0.152180) | 0.434295 / 0.434364 (-0.000069) | 0.496848 / 0.540337 
(-0.043489) | 0.592572 / 1.386936 (-0.794364) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008014 / 0.011353 (-0.003339) | 0.005607 / 0.011008 (-0.005401) | 0.076826 / 0.038508 (0.038318) | 0.035283 / 0.023109 (0.012174) | 0.347809 / 0.275898 (0.071911) | 0.382482 / 0.323480 (0.059003) | 0.006276 / 0.007986 (-0.001709) | 0.005978 / 0.004328 (0.001650) | 0.074938 / 0.004250 (0.070687) | 0.054323 / 0.037052 (0.017271) | 0.344027 / 0.258489 (0.085538) | 0.397623 / 0.293841 (0.103783) | 0.037851 / 0.128546 (-0.090695) | 0.012649 / 0.075646 (-0.062997) | 0.086169 / 0.419271 (-0.333103) | 0.051510 / 0.043533 (0.007977) | 0.341112 / 0.255139 (0.085973) | 0.357957 / 0.283200 (0.074757) | 0.110949 / 0.141683 (-0.030734) | 1.479573 / 1.452155 (0.027419) | 1.578572 / 1.492716 (0.085855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.310678 / 0.018006 (0.292672) | 0.525504 / 0.000490 (0.525015) | 0.000447 / 0.000200 (0.000247) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031262 / 0.037411 (-0.006149) | 0.113801 / 0.014526 (0.099275) | 0.124967 / 0.176557 (-0.051590) | 0.175226 / 0.737135 (-0.561909) | 0.129377 / 0.296338 (-0.166962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420672 / 0.215209 (0.205463) | 4.181337 / 2.077655 (2.103682) | 1.985524 / 1.504120 (0.481404) | 1.803468 / 1.541195 (0.262273) | 1.952915 / 
1.468490 (0.484425) | 0.710928 / 4.584777 (-3.873849) | 3.886245 / 3.745712 (0.140533) | 3.737837 / 5.269862 (-1.532024) | 1.806859 / 4.565676 (-2.758818) | 0.088461 / 0.424275 (-0.335814) | 0.013125 / 0.007607 (0.005518) | 0.522410 / 0.226044 (0.296365) | 5.232591 / 2.268929 (2.963663) | 2.451188 / 55.444624 (-52.993437) | 2.127725 / 6.876477 (-4.748751) | 2.232859 / 2.142072 (0.090786) | 0.854257 / 4.805227 (-3.950970) | 0.171004 / 6.500664 (-6.329661) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257700 / 1.841788 (-0.584088) | 15.738605 / 8.074308 (7.664297) | 15.021698 / 10.191392 (4.830306) | 0.147422 / 0.680424 (-0.533002) | 0.017928 / 0.534201 (-0.516273) | 0.428121 / 0.579283 (-0.151162) | 0.432056 / 0.434364 (-0.002308) | 0.498318 / 0.540337 (-0.042020) | 0.591040 / 1.386936 (-0.795896) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ac74267032ef3608779a8c8c4361b95a83ecbcb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007014 / 0.011353 (-0.004339) | 0.004792 / 0.011008 (-0.006216) | 0.099822 / 0.038508 (0.061314) | 0.029333 / 0.023109 (0.006224) | 0.306453 / 0.275898 (0.030555) | 0.344598 / 0.323480 (0.021118) | 0.005121 / 0.007986 (-0.002865) | 0.004850 / 0.004328 (0.000522) | 0.076668 / 0.004250 (0.072417) | 0.039980 / 0.037052 (0.002927) | 0.312276 / 0.258489 (0.053787) | 0.354722 / 0.293841 (0.060881) | 0.031653 / 0.128546 (-0.096893) | 0.011743 / 0.075646 (-0.063903) | 0.322998 / 0.419271 (-0.096274) | 0.042813 / 0.043533 (-0.000720) | 0.308855 / 0.255139 (0.053716) | 0.332650 / 0.283200 (0.049451) | 0.087155 / 0.141683 (-0.054528) | 1.454946 / 1.452155 (0.002791) | 1.550589 / 1.492716 (0.057873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192921 / 0.018006 (0.174914) | 0.411155 / 0.000490 (0.410666) | 0.004779 / 0.000200 
(0.004579) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024462 / 0.037411 (-0.012950) | 0.100320 / 0.014526 (0.085794) | 0.105509 / 0.176557 (-0.071048) | 0.168533 / 0.737135 (-0.568602) | 0.110018 / 0.296338 (-0.186321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415025 / 0.215209 (0.199816) | 4.144583 / 2.077655 (2.066928) | 1.871627 / 1.504120 (0.367507) | 1.671638 / 1.541195 (0.130443) | 1.734458 / 1.468490 (0.265968) | 0.693435 / 4.584777 (-3.891342) | 3.487999 / 3.745712 (-0.257713) | 3.196553 / 5.269862 (-2.073308) | 1.628499 / 4.565676 (-2.937178) | 0.082999 / 0.424275 (-0.341276) | 0.012822 / 0.007607 (0.005215) | 0.514904 / 0.226044 (0.288860) | 5.157525 / 2.268929 (2.888596) | 2.313093 / 55.444624 (-53.131531) | 1.968335 / 6.876477 (-4.908142) | 2.083462 / 2.142072 (-0.058610) | 0.804485 / 4.805227 (-4.000742) | 0.152290 / 6.500664 (-6.348374) | 0.066813 / 0.075469 (-0.008656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210370 / 1.841788 (-0.631418) | 14.261779 / 8.074308 (6.187471) | 14.268121 / 10.191392 (4.076729) | 0.149216 / 0.680424 (-0.531207) | 0.016529 / 0.534201 (-0.517672) | 0.378814 / 0.579283 (-0.200469) | 0.386304 / 0.434364 (-0.048060) | 0.439653 / 0.540337 (-0.100684) | 0.523658 / 1.386936 (-0.863278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006979 / 0.011353 (-0.004374) | 0.004718 / 0.011008 (-0.006290) | 0.077023 / 0.038508 (0.038514) | 0.029080 / 0.023109 (0.005971) | 0.343145 / 0.275898 (0.067247) | 0.380633 / 0.323480 (0.057153) | 0.006057 / 0.007986 (-0.001928) | 0.003541 / 0.004328 (-0.000788) | 0.075773 / 0.004250 (0.071523) | 0.039112 / 0.037052 (0.002060) | 0.342355 / 0.258489 (0.083866) | 0.386002 / 0.293841 (0.092161) | 0.033238 / 0.128546 (-0.095308) | 0.011696 / 0.075646 (-0.063950) | 0.086178 / 0.419271 (-0.333093) | 0.045219 / 0.043533 (0.001686) | 0.360710 / 0.255139 (0.105571) | 0.367490 / 0.283200 (0.084290) | 0.093041 / 0.141683 (-0.048642) | 1.523670 / 1.452155 (0.071516) | 1.595280 / 1.492716 (0.102564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235888 / 0.018006 (0.217882) | 0.410205 / 0.000490 (0.409715) | 0.000405 / 0.000200 (0.000205) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025752 / 0.037411 (-0.011659) | 0.103343 / 0.014526 (0.088818) | 0.108722 / 0.176557 (-0.067834) | 0.159241 / 0.737135 (-0.577894) | 0.113684 / 0.296338 (-0.182654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441809 / 0.215209 (0.226600) | 4.410893 / 2.077655 (2.333238) | 2.104061 / 1.504120 (0.599941) | 1.854016 / 1.541195 (0.312821) | 1.947100 / 1.468490 (0.478610) | 0.697682 / 4.584777 (-3.887095) | 3.467513 / 3.745712 (-0.278199) | 1.911603 / 5.269862 (-3.358258) | 1.187479 / 4.565676 (-3.378197) | 0.083153 / 0.424275 (-0.341122) | 0.012651 / 0.007607 (0.005044) | 0.542081 / 0.226044 (0.316036) | 5.444622 / 2.268929 (3.175693) | 2.524236 / 55.444624 (-52.920388) | 2.190463 / 6.876477 (-4.686014) | 2.265764 / 2.142072 (0.123691) | 0.810778 / 4.805227 (-3.994450) | 0.152459 / 6.500664 (-6.348205) | 0.067815 / 0.075469 (-0.007654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334388 / 1.841788 (-0.507400) | 14.640459 / 8.074308 (6.566151) | 14.714874 / 10.191392 (4.523482) | 0.153479 / 0.680424 (-0.526945) | 0.016709 / 0.534201 (-0.517492) | 0.379427 / 0.579283 (-0.199856) | 0.391602 / 0.434364 (-0.042762) | 0.438297 / 0.540337 (-0.102041) | 0.524170 / 1.386936 (-0.862766) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b277cef5cb56c0c506eda082fb69fddb839156a1 \"CML watermark\")\n" ]
"2023-03-21T14:18:09"
"2023-03-23T13:19:27"
"2023-03-23T13:12:25"
MEMBER
null
Following the discussion at https://github.com/huggingface/datasets/pull/5589. Right now `to_iterable_dataset` on images/audio hurts iterable dataset performance a lot (e.g. 4x slower), because it encodes and decodes images/audio unnecessarily. I fixed it by providing a generator that yields undecoded examples.
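For context only (not taken from this PR), the user-facing call being optimized looks roughly like the sketch below; the dataset name is a placeholder:

```python
from datasets import load_dataset

# Placeholder image dataset; any dataset with an Image/Audio feature applies.
ds = load_dataset("beans", split="train")

# Convert the map-style Dataset to an IterableDataset; with this change,
# image/audio columns should no longer go through an extra encode+decode pass.
iterable_ds = ds.to_iterable_dataset(num_shards=4)

for example in iterable_ds.take(2):
    print(type(example["image"]))
```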
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5655/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5655", "html_url": "https://github.com/huggingface/datasets/pull/5655", "diff_url": "https://github.com/huggingface/datasets/pull/5655.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5655.patch", "merged_at": "2023-03-23T13:12:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/5654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
https://api.github.com/repos/huggingface/datasets/issues/5654/events
https://github.com/huggingface/datasets/issues/5654
1,633,523,705
I_kwDODunzps5hXZf5
5,654
Offset overflow when executing Dataset.map
{ "login": "jan-pair", "id": 118280608, "node_id": "U_kgDOBwzRoA", "avatar_url": "https://avatars.githubusercontent.com/u/118280608?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jan-pair", "html_url": "https://github.com/jan-pair", "followers_url": "https://api.github.com/users/jan-pair/followers", "following_url": "https://api.github.com/users/jan-pair/following{/other_user}", "gists_url": "https://api.github.com/users/jan-pair/gists{/gist_id}", "starred_url": "https://api.github.com/users/jan-pair/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jan-pair/subscriptions", "organizations_url": "https://api.github.com/users/jan-pair/orgs", "repos_url": "https://api.github.com/users/jan-pair/repos", "events_url": "https://api.github.com/users/jan-pair/events{/privacy}", "received_events_url": "https://api.github.com/users/jan-pair/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n", "As a workaround, one can replace\r\n`return {\"hr\": torch.stack([crop_transf(tensor) for _ in range(25)])}`\r\nwith\r\n`return {f\"hr_crop_{i}\": crop_transf(tensor) for i in range(25)}`\r\nand then choose appropriate crop randomly in further processing, but I still don't understand why the original approach doesn't work(\r\n" ]
"2023-03-21T09:33:27"
"2023-03-21T10:32:07"
null
NONE
null
### Describe the bug Hi, I'm trying to use `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function executes all iterations, and then returns the following error: ```bash Traceback (most recent call last): File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize self.write_examples_on_file() File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch self.write_table(pa_table, writer_batch_size) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table pa_table = pa_table.combine_chunks() File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays ``` Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images that can be replaced by any appropriate): ### Steps to reproduce the bug ```python from glob import glob import torch from datasets import Dataset, Image from torchvision.transforms import PILToTensor, RandomCrop file_paths = glob("/home/datasets/DIV2K_train_HR/*") to_tensor = PILToTensor() crop_transf = RandomCrop(size=256) def prepare_data(example): tensor = to_tensor(example["image"].convert("RGB")) return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])} train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image()) train_data = train_data.map( prepare_data, cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp", desc="Caching multiple random crops of image", remove_columns="image", ) print(train_data[0].keys(), train_data[0]["hr"].shape) ``` ### Expected behavior Cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])` ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Pytorch version: 2.0.0+cu117 - torchvision version: 0.15.1+cu117
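A sketch of one possible mitigation, assuming the overflow comes from a single Arrow writer batch growing past the 2 GiB offset limit (this is an assumption, not a fix confirmed in the thread); it reuses the names from the reproduction script above:

```python
# Each example stores 25 * 3 * 256 * 256 bytes (~4.9 MB); with the default
# writer_batch_size of 1000, one written batch would exceed Arrow's 2 GiB
# limit for 32-bit offsets. Lowering writer_batch_size keeps batches smaller.
train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image())
train_data = train_data.map(
    prepare_data,
    writer_batch_size=100,  # assumption: ~0.5 GiB per written batch
    cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
    desc="Caching multiple random crops of image",
    remove_columns="image",
)
train_data.set_format("torch")  # so "hr" comes back as a tensor, per the comments
```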
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5653/comments
https://api.github.com/repos/huggingface/datasets/issues/5653/events
https://github.com/huggingface/datasets/issues/5653
1,633,254,159
I_kwDODunzps5hWXsP
5,653
Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented
{ "login": "RmZeta2718", "id": 42400165, "node_id": "MDQ6VXNlcjQyNDAwMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RmZeta2718", "html_url": "https://github.com/RmZeta2718", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "I agree this should be documented" ]
"2023-03-21T05:25:35"
"2023-03-24T16:36:23"
"2023-03-24T16:36:23"
NONE
null
### Describe the bug [`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) affects `num_shards`, but this is not documented. ### Steps to reproduce the bug Nothing to reproduce ### Expected behavior [The documentation of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`; it should also mention `num_proc`. ### Environment info datasets main documentation
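A small sketch of the reported behavior (paths and shard counts are illustrative assumptions):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1_000))})

# With num_proc set and num_shards left unspecified, the number of written
# shards follows num_proc (the behavior this issue asks to document).
ds.save_to_disk("tmp_ds_numproc", num_proc=4)

# Passing num_shards explicitly decouples sharding from num_proc.
ds.save_to_disk("tmp_ds_explicit", num_shards=8, num_proc=4)
```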
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5653/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5652/comments
https://api.github.com/repos/huggingface/datasets/issues/5652/events
https://github.com/huggingface/datasets/pull/5652
1,632,546,073
PR_kwDODunzps5MeVUR
5,652
Copy features
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007455 / 0.011353 (-0.003898) | 0.005278 / 0.011008 (-0.005731) | 0.098981 / 0.038508 (0.060473) | 0.029208 / 0.023109 (0.006099) | 0.304132 / 0.275898 (0.028234) | 0.340010 / 0.323480 (0.016530) | 0.005514 / 0.007986 (-0.002472) | 0.003636 / 0.004328 (-0.000692) | 0.076737 / 0.004250 (0.072486) | 0.041985 / 0.037052 (0.004933) | 0.314941 / 0.258489 (0.056452) | 0.346686 / 0.293841 (0.052845) | 0.032528 / 0.128546 (-0.096018) | 0.011795 / 0.075646 (-0.063851) | 0.322122 / 0.419271 (-0.097150) | 0.051548 / 0.043533 (0.008015) | 0.310561 / 0.255139 (0.055422) | 0.329443 / 0.283200 (0.046243) | 0.092820 / 0.141683 (-0.048863) | 1.495764 / 1.452155 (0.043609) | 1.586734 / 1.492716 (0.094018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195830 / 0.018006 (0.177824) | 0.422075 / 0.000490 (0.421586) | 0.005483 / 0.000200 (0.005283) | 0.000133 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023468 / 0.037411 (-0.013943) | 0.097713 / 0.014526 (0.083187) | 0.105331 / 0.176557 (-0.071225) | 0.166237 / 0.737135 (-0.570898) | 0.108924 / 0.296338 (-0.187415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671901 / 0.215209 (0.456692) | 6.745691 / 2.077655 (4.668036) | 
2.132508 / 1.504120 (0.628388) | 1.693808 / 1.541195 (0.152614) | 1.715282 / 1.468490 (0.246792) | 0.955354 / 4.584777 (-3.629422) | 3.810296 / 3.745712 (0.064584) | 2.214891 / 5.269862 (-3.054970) | 1.461513 / 4.565676 (-3.104164) | 0.109846 / 0.424275 (-0.314430) | 0.013546 / 0.007607 (0.005939) | 0.780046 / 0.226044 (0.554001) | 7.789020 / 2.268929 (5.520091) | 2.602411 / 55.444624 (-52.842213) | 1.995096 / 6.876477 (-4.881380) | 2.009022 / 2.142072 (-0.133051) | 1.069215 / 4.805227 (-3.736012) | 0.179812 / 6.500664 (-6.320852) | 0.068125 / 0.075469 (-0.007344) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201866 / 1.841788 (-0.639921) | 13.878814 / 8.074308 (5.804506) | 14.179264 / 10.191392 (3.987872) | 0.128908 / 0.680424 (-0.551515) | 0.017257 / 0.534201 (-0.516944) | 0.379500 / 0.579283 (-0.199783) | 0.393308 / 0.434364 (-0.041056) | 0.444700 / 0.540337 (-0.095638) | 0.531043 / 1.386936 (-0.855893) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007413 / 0.011353 (-0.003940) | 0.005431 / 0.011008 (-0.005577) | 0.078158 / 0.038508 (0.039650) | 0.028837 / 0.023109 (0.005728) | 0.343635 / 0.275898 (0.067737) | 0.383041 / 0.323480 (0.059561) | 0.005283 / 0.007986 (-0.002703) | 0.003673 / 0.004328 (-0.000655) | 0.076461 / 0.004250 (0.072211) | 0.038625 / 0.037052 (0.001573) | 0.341109 / 0.258489 (0.082620) | 0.387027 / 0.293841 (0.093186) | 0.032512 / 0.128546 (-0.096034) | 0.011903 / 0.075646 (-0.063744) | 0.086340 / 0.419271 (-0.332931) | 0.043211 / 0.043533 (-0.000321) | 0.339994 / 0.255139 (0.084855) | 0.370868 / 0.283200 (0.087668) | 0.091679 / 0.141683 (-0.050004) | 1.547188 / 1.452155 (0.095033) | 1.578545 / 1.492716 (0.085829) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216981 / 0.018006 (0.198975) | 0.412206 / 0.000490 (0.411716) | 0.004243 / 0.000200 (0.004043) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025392 / 0.037411 (-0.012020) | 0.102577 / 0.014526 (0.088051) | 0.107672 / 0.176557 (-0.068884) | 0.160657 / 0.737135 (-0.576478) | 0.111646 / 0.296338 (-0.184692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.698815 / 0.215209 (0.483606) | 6.958931 / 2.077655 (4.881276) | 2.344216 / 1.504120 (0.840096) | 1.907752 / 1.541195 (0.366557) | 1.964251 / 1.468490 (0.495761) | 0.950754 / 4.584777 (-3.634023) | 3.829700 / 3.745712 (0.083988) | 3.055565 / 5.269862 (-2.214297) | 1.575851 / 4.565676 (-2.989825) | 0.109227 / 0.424275 (-0.315048) | 0.013163 / 0.007607 (0.005556) | 0.804613 / 0.226044 (0.578569) | 8.015035 / 2.268929 (5.746107) | 2.796358 / 55.444624 (-52.648266) | 2.212561 / 6.876477 (-4.663916) | 2.229918 / 2.142072 (0.087845) | 1.062041 / 4.805227 (-3.743186) | 0.181384 / 6.500664 (-6.319280) | 0.068564 / 0.075469 (-0.006905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.287904 / 1.841788 (-0.553884) | 14.539222 / 8.074308 (6.464914) | 14.232097 / 10.191392 (4.040705) | 0.130870 / 0.680424 (-0.549554) | 0.016710 / 0.534201 (-0.517491) | 0.384454 / 0.579283 (-0.194829) | 0.391750 / 0.434364 (-0.042614) | 0.443995 / 0.540337 (-0.096343) | 0.526255 / 1.386936 (-0.860681) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bd46874a580b888bdc82b53daace79884f04bc62 \"CML watermark\")\n", "Arf I need to fix some tests first - sorry", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008393 / 0.011353 (-0.002959) | 0.005635 / 0.011008 (-0.005373) | 0.114840 / 0.038508 (0.076332) | 0.039272 / 0.023109 (0.016163) | 0.352116 / 0.275898 (0.076218) | 0.386614 / 0.323480 (0.063134) | 0.006348 / 0.007986 (-0.001638) | 0.005872 / 0.004328 (0.001544) | 0.086437 / 0.004250 (0.082187) | 0.054003 / 0.037052 (0.016951) | 0.350302 / 0.258489 (0.091813) | 0.400148 / 0.293841 (0.106308) | 0.042436 / 0.128546 (-0.086111) | 0.013987 / 0.075646 (-0.061660) | 0.399434 / 0.419271 (-0.019837) | 0.059223 / 0.043533 (0.015690) | 0.354511 / 0.255139 (0.099372) | 0.377764 / 0.283200 (0.094564) | 0.112297 / 0.141683 (-0.029386) | 1.677483 / 1.452155 (0.225328) | 1.784942 / 1.492716 (0.292226) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233334 / 0.018006 (0.215328) | 0.450575 / 0.000490 (0.450085) | 0.000376 / 0.000200 (0.000176) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031995 / 0.037411 (-0.005416) | 0.126798 / 0.014526 (0.112272) | 0.138453 / 0.176557 (-0.038104) | 0.207360 / 0.737135 (-0.529775) | 0.147744 / 0.296338 (-0.148594) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.481496 / 0.215209 (0.266287) | 4.810495 / 2.077655 (2.732840) | 2.457917 / 1.504120 (0.953797) | 2.300073 / 1.541195 (0.758879) | 2.065595 / 1.468490 (0.597105) | 0.814589 / 4.584777 (-3.770188) | 4.566496 / 3.745712 (0.820784) | 2.386947 / 5.269862 (-2.882914) | 1.531639 / 4.565676 (-3.034037) | 0.099569 / 0.424275 (-0.324706) | 0.014971 / 0.007607 (0.007364) | 0.590359 / 0.226044 (0.364314) | 5.885250 / 2.268929 (3.616322) | 2.706799 / 55.444624 (-52.737826) | 2.324485 / 6.876477 (-4.551992) | 2.452751 / 2.142072 (0.310678) | 0.966955 / 4.805227 (-3.838272) | 0.198165 / 6.500664 (-6.302499) | 0.076877 / 0.075469 (0.001408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499085 / 1.841788 (-0.342702) | 17.705516 / 8.074308 (9.631208) | 16.481174 / 10.191392 (6.289782) | 0.191832 / 0.680424 (-0.488592) | 0.021417 / 0.534201 (-0.512784) | 0.519647 / 0.579283 (-0.059636) | 0.498432 / 
0.434364 (0.064068) | 0.598206 / 0.540337 (0.057868) | 0.700990 / 1.386936 (-0.685946) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008746 / 0.011353 (-0.002607) | 0.006052 / 0.011008 (-0.004956) | 0.092938 / 0.038508 (0.054430) | 0.038932 / 0.023109 (0.015823) | 0.406919 / 0.275898 (0.131021) | 0.444325 / 0.323480 (0.120845) | 0.006735 / 0.007986 (-0.001251) | 0.005972 / 0.004328 (0.001643) | 0.088152 / 0.004250 (0.083902) | 0.051009 / 0.037052 (0.013957) | 0.407415 / 0.258489 (0.148926) | 0.481048 / 0.293841 (0.187207) | 0.043268 / 0.128546 (-0.085278) | 0.014574 / 0.075646 (-0.061072) | 0.103555 / 0.419271 (-0.315716) | 0.058251 / 0.043533 (0.014719) | 0.406294 / 0.255139 (0.151155) | 0.429229 / 0.283200 (0.146029) | 0.116977 / 0.141683 (-0.024705) | 1.765885 / 1.452155 (0.313730) | 1.885557 / 1.492716 (0.392841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284014 / 0.018006 (0.266008) | 0.458066 / 0.000490 (0.457576) | 0.022286 / 0.000200 (0.022086) | 0.000158 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033971 / 0.037411 (-0.003440) | 0.132030 / 0.014526 (0.117504) | 0.141725 / 0.176557 (-0.034831) | 0.199818 / 0.737135 (-0.537318) | 0.149176 / 0.296338 (-0.147162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.511463 / 0.215209 (0.296254) | 4.917921 / 2.077655 (2.840267) | 2.382377 / 1.504120 (0.878257) | 
2.154599 / 1.541195 (0.613404) | 2.247858 / 1.468490 (0.779368) | 0.834524 / 4.584777 (-3.750253) | 4.560010 / 3.745712 (0.814297) | 2.403055 / 5.269862 (-2.866806) | 1.780784 / 4.565676 (-2.784893) | 0.101409 / 0.424275 (-0.322866) | 0.014657 / 0.007607 (0.007050) | 0.610137 / 0.226044 (0.384093) | 6.051011 / 2.268929 (3.782083) | 2.887357 / 55.444624 (-52.557267) | 2.518225 / 6.876477 (-4.358252) | 2.559654 / 2.142072 (0.417582) | 0.981226 / 4.805227 (-3.824001) | 0.197323 / 6.500664 (-6.303341) | 0.076851 / 0.075469 (0.001382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.554662 / 1.841788 (-0.287126) | 18.038993 / 8.074308 (9.964685) | 16.719948 / 10.191392 (6.528556) | 0.195641 / 0.680424 (-0.484783) | 0.020699 / 0.534201 (-0.513502) | 0.498949 / 0.579283 (-0.080334) | 0.487775 / 0.434364 (0.053411) | 0.591413 / 0.540337 (0.051075) | 0.708520 / 1.386936 (-0.678416) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#39de0d78224c070be33d0820ec9203018fb721d1 \"CML watermark\")\n", "Ready for review @mariosasko :)", "Yea it does behave as expected, but modifying a dataset's features dict is not recommended and can lead to unpredictable behaviors. By copying the features, we make sure users don't modify the dataset's features dict.\r\n\r\nSince the attribute is public, users expect to be able to do whatever they want with it, without checking if they have to copy it or not", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008069 / 0.011353 (-0.003284) | 0.005051 / 0.011008 (-0.005958) | 0.096587 / 0.038508 (0.058079) | 0.032954 / 0.023109 (0.009844) | 0.317877 / 0.275898 (0.041979) | 0.328677 / 0.323480 (0.005197) | 0.005524 / 0.007986 (-0.002462) | 0.003958 / 0.004328 (-0.000370) | 0.072692 / 0.004250 (0.068441) | 0.044554 / 0.037052 (0.007502) | 0.311121 / 0.258489 (0.052632) | 0.355912 / 0.293841 (0.062071) | 0.035934 / 0.128546 (-0.092612) | 0.012056 / 0.075646 (-0.063590) | 0.332575 / 0.419271 (-0.086696) | 0.049788 / 0.043533 (0.006255) | 0.307918 / 0.255139 
(0.052779) | 0.326757 / 0.283200 (0.043557) | 0.098671 / 0.141683 (-0.043012) | 1.424625 / 1.452155 (-0.027530) | 1.507944 / 1.492716 (0.015228) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207976 / 0.018006 (0.189970) | 0.439604 / 0.000490 (0.439114) | 0.000435 / 0.000200 (0.000235) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026961 / 0.037411 (-0.010451) | 0.106627 / 0.014526 (0.092101) | 0.115292 / 0.176557 (-0.061264) | 0.171901 / 0.737135 (-0.565234) | 0.123276 / 0.296338 (-0.173062) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407679 / 0.215209 (0.192469) | 4.071958 / 2.077655 (1.994303) | 1.854270 / 1.504120 (0.350151) | 1.678406 / 1.541195 (0.137211) | 1.715890 / 1.468490 (0.247400) | 0.705536 / 4.584777 (-3.879241) | 3.774198 / 3.745712 (0.028486) | 2.096429 / 5.269862 (-3.173432) | 1.431810 / 4.565676 (-3.133866) | 0.085557 / 0.424275 (-0.338718) | 0.012191 / 0.007607 (0.004584) | 0.502937 / 0.226044 (0.276893) | 5.034391 / 2.268929 (2.765463) | 2.393826 / 55.444624 (-53.050799) | 2.037383 / 6.876477 (-4.839094) | 2.192037 / 2.142072 (0.049964) | 0.829298 / 4.805227 (-3.975929) | 0.167781 / 6.500664 (-6.332883) | 0.063405 / 0.075469 (-0.012064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.179189 / 1.841788 (-0.662599) | 14.464132 / 8.074308 (6.389824) | 14.869024 / 10.191392 (4.677632) | 0.172864 / 0.680424 (-0.507560) | 0.017817 / 0.534201 (-0.516384) | 0.427849 / 0.579283 (-0.151434) | 0.434447 / 0.434364 (0.000083) | 0.502077 / 0.540337 (-0.038260) | 0.599587 / 1.386936 (-0.787349) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after 
write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007366 / 0.011353 (-0.003987) | 0.004939 / 0.011008 (-0.006069) | 0.074982 / 0.038508 (0.036474) | 0.032611 / 0.023109 (0.009501) | 0.340670 / 0.275898 (0.064772) | 0.372471 / 0.323480 (0.048991) | 0.005567 / 0.007986 (-0.002418) | 0.003956 / 0.004328 (-0.000372) | 0.074550 / 0.004250 (0.070300) | 0.047097 / 0.037052 (0.010045) | 0.337049 / 0.258489 (0.078560) | 0.391512 / 0.293841 (0.097671) | 0.035712 / 0.128546 (-0.092835) | 0.012040 / 0.075646 (-0.063606) | 0.087126 / 0.419271 (-0.332146) | 0.048290 / 0.043533 (0.004757) | 0.335069 / 0.255139 (0.079930) | 0.362080 / 0.283200 (0.078881) | 0.098606 / 0.141683 (-0.043077) | 1.456802 / 1.452155 (0.004647) | 1.554652 / 1.492716 (0.061936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200015 / 0.018006 (0.182009) | 0.442772 / 0.000490 (0.442283) | 0.004594 / 0.000200 (0.004394) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028510 / 0.037411 (-0.008901) | 0.109654 / 0.014526 (0.095128) | 0.119921 / 0.176557 (-0.056636) | 0.170289 / 0.737135 (-0.566846) | 0.125288 / 0.296338 (-0.171051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430919 / 0.215209 (0.215710) | 4.274132 / 2.077655 (2.196478) | 2.014385 / 1.504120 (0.510265) | 1.822094 / 1.541195 (0.280899) | 1.938188 / 1.468490 (0.469698) | 0.707812 / 4.584777 (-3.876965) | 3.925730 / 3.745712 (0.180018) | 2.117481 / 5.269862 (-3.152381) | 1.369521 / 4.565676 (-3.196155) | 0.088414 / 0.424275 (-0.335861) | 0.013101 / 0.007607 (0.005494) | 0.538468 / 0.226044 (0.312424) | 5.384614 / 2.268929 (3.115685) | 2.487709 / 55.444624 (-52.956915) | 2.152060 / 6.876477 (-4.724417) | 2.225777 / 2.142072 (0.083705) | 0.856749 / 4.805227 (-3.948479) | 0.173299 / 6.500664 (-6.327366) | 0.068872 / 0.075469 (-0.006597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched 
tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268009 / 1.841788 (-0.573778) | 15.102648 / 8.074308 (7.028340) | 14.216810 / 10.191392 (4.025418) | 0.163661 / 0.680424 (-0.516763) | 0.017394 / 0.534201 (-0.516807) | 0.418030 / 0.579283 (-0.161253) | 0.413717 / 0.434364 (-0.020647) | 0.487526 / 0.540337 (-0.052811) | 0.581499 / 1.386936 (-0.805437) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#46bb11e96d053c497035a2436702860de9960a65 \"CML watermark\")\n" ]
"2023-03-20T17:17:23"
"2023-03-23T13:19:19"
"2023-03-23T13:12:08"
MEMBER
null
Some users (even internally at HF) are doing ```python dset_features = dset.features dset_features.pop(col_to_remove) dset = dset.map(..., features=dset_features) ``` Right now this causes issues because it modifies the features dict in place before the map. In this PR I modified `dset.features` to return a copy of the features, so that users can modify it if they want.
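For illustration, a minimal sketch of the safer pattern (the dataset name and the "label" column are only examples standing in for `col_to_remove`; this is an editor's sketch, not part of the PR itself):

```python
from datasets import Features, load_dataset

# Any dataset with a "label" column works here; "rotten_tomatoes" is just an example
ds = load_dataset("rotten_tomatoes", split="train")

# Build a fresh Features object instead of mutating ds.features in place
new_features = Features({name: feat for name, feat in ds.features.items() if name != "label"})

# Drop the column and pass the reduced features explicitly, as in the snippet above
ds = ds.map(remove_columns=["label"], features=new_features)
```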
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5652/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5652", "html_url": "https://github.com/huggingface/datasets/pull/5652", "diff_url": "https://github.com/huggingface/datasets/pull/5652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5652.patch", "merged_at": "2023-03-23T13:12:08" }
true
https://api.github.com/repos/huggingface/datasets/issues/5651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5651/comments
https://api.github.com/repos/huggingface/datasets/issues/5651/events
https://github.com/huggingface/datasets/issues/5651
1,631,967,509
I_kwDODunzps5hRdkV
5,651
expanduser in save_to_disk
{ "login": "RmZeta2718", "id": 42400165, "node_id": "MDQ6VXNlcjQyNDAwMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RmZeta2718", "html_url": "https://github.com/RmZeta2718", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false }
[ { "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false } ]
null
[ "`save_to_disk` should indeed expand `~`. Marking it as a \"good first issue\".", "#self-assign\r\n\r\nFile path to code: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_dataset.py#L1364\r\n\r\n@RmZeta2718 I created a pull request for this issue. ", "Hello, \r\nIt says `save_to_disk` is deprecated in 2.8.0, so the alternative to this will be `storage_options`? \r\n\r\nhttps://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.save_to_disk", "@ashikshafi08 I think you misunderstood the warning. The method `save_to_disk` is not deprecated only the optional parameter `fs`.\r\nAlso @benjaminbrown038 as I cannot find your PR I would like to work on this if you don't mind.", "@mariosasko It's been several months and the PR is not reviewed. Could you please take a look? I assume this is not complicated and could be merged fairly soon." ]
"2023-03-20T12:02:18"
"2023-10-27T14:04:37"
"2023-10-27T14:04:37"
NONE
null
### Describe the bug save_to_disk() does not expand `~` 1. `dataset = load_dataset("any dataset")` 2. `dataset.save_to_disk("~/data")` 3. a folder named "~" is created in the current folder 4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`) related issue https://github.com/huggingface/transformers/issues/10628 ### Steps to reproduce the bug As described above. ### Expected behavior The `~` should be expanded to the user's home directory (expanduser). ### Environment info - datasets 2.10.1 - python 3.10
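A minimal user-side workaround until the path is expanded by the library itself (the dataset name is only an example):

```python
import os
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")  # any dataset works here

# Expand "~" manually before handing the path to save_to_disk
dataset.save_to_disk(os.path.expanduser("~/data"))
```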
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5651/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5650/comments
https://api.github.com/repos/huggingface/datasets/issues/5650/events
https://github.com/huggingface/datasets/issues/5650
1,630,336,919
I_kwDODunzps5hLPeX
5,650
load_dataset doesn't work correctly with my image data
{ "login": "WiNE-iNEFF", "id": 41611046, "node_id": "MDQ6VXNlcjQxNjExMDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WiNE-iNEFF", "html_url": "https://github.com/WiNE-iNEFF", "followers_url": "https://api.github.com/users/WiNE-iNEFF/followers", "following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}", "gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}", "starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions", "organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs", "repos_url": "https://api.github.com/users/WiNE-iNEFF/repos", "events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}", "received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Can you post a reproducible code snippet of what you tried to do?\r\n\r\n", "> Can you post a reproducible code snippet of what you tried to do?\n> \n> \n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\n```", "hi @WiNE-iNEFF ! can you please also tell a bit more about how your data is structured (directory structure and filenames patterns)?", "> hi @WiNE-iNEFF ! can you please also tell a bit more about how your data is structured (directory structure and filenames patterns)?\n\nAll file have format .png converted in RGBA. \nIn main folder \"MyData\" contain 4 folder with images. In function load_dataset i use folder \"MyData\"", "@WiNE-iNEFF I'm sorry there is still not enough information to answer your question :( For now I can only assume that your [filenames contain split names](https://huggingface.co/docs/datasets/repository_structure#splits-and-file-names) which are somehow incorrectly parsed. \r\nWhat would be the output if you omit `split` while loading? Like just\r\n```python\r\nds = load_dataset(\"MyData\")\r\nprint(ds)\r\n```\r\n\r\n", "> @WiNE-iNEFF I'm sorry there is still not enough information to answer your question :( For now I can only assume that your [filenames contain split names](https://huggingface.co/docs/datasets/repository_structure#splits-and-file-names) which are somehow incorrectly parsed. \n> What would be the output if you omit `split` while loading? Like just\n> ```python\n> ds = load_dataset(\"MyData\")\n> print(ds)\n> ```\n> \n> \n\n```python\nDataset({\n features: ['image', 'label'],\n num_rows: 4\n})\n```", "@WiNE-iNEFF My only guess is that 4 images in your data have `\"train\"` string in their names (something like `\"train_image_0.png\"`) and others do not and the loader ignores all the files that do not contain split name in filename. If it's true, please try to remove \"train\" from filenames. Or maybe they are inside a directory named \"train\", then the directory should be renamed (unless you want to put only these 4 specific images to the train but apparently you do not).\r\n\r\nIf there is a bug I cannot investigate it unfortunately because I cannot reproduce your case without some data samples. ", "> @WiNE-iNEFF My only guess is that 4 images in your data have `\"train\"` string in their names (something like `\"train_image_0.png\"`) and others do not and the loader ignores all the files that do not contain split name in filename. If it's true, please try to remove \"train\" from filenames. Or maybe they are inside a directory named \"train\", then the directory should be renamed (unless you want to put only these 4 specific images to the train but apparently you do not).\n> \n> If there is a bug I cannot investigate it unfortunately because I cannot reproduce your case without some data samples. \n\nI checked my files and some of them do have the words train, valid and test in their names, but the number of such images is more than 500, not 4.", "@WiNE-iNEFF Probably they are named inconsistently so that the correct pattern for which files should correspond to which split cannot be inferred. 
You can make it clearer to the loader by removing split names from filenames and putting files in separate folder for each split (you can take a look at the [documentation for imagefolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder)):\r\n```\r\n Fuaimeanna2/\r\n├─ test\r\n│   ├─ label_0\r\n│   │   ├── filename_0.jpg\r\n│   │   └── filename_1.jpg\r\n│   │   └── ...\r\n│   ├─ label_1\r\n│   │   └── ...\r\n│   ├─ label_2\r\n│   │   └── ...\r\n│   └─ label_3\r\n│   └── ...\r\n├─ train\r\n│   ├─ label_0\r\n│   │   └── ...\r\n│   ├─ label_1\r\n│   │   └── ...\r\n│   ├─ label_2\r\n│   │   └── ...\r\n│   └─ label_3\r\n│   └── ...\r\n└── validation\r\n    ├─ label_0\r\n   │   └── ...\r\n    ├─ label_1\r\n   │   └── ...\r\n    ├─ label_2\r\n   │   └── ...\r\n └─ label_3\r\n └── ...\r\n```", "> @WiNE-iNEFF Probably they are named inconsistently so that the correct pattern for which files should correspond to which split cannot be inferred. You can make it clearer to the loader by removing split names from filenames and putting files in separate folder for each split (you can take a look at the [documentation for imagefolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder)):\n> ```\n> Fuaimeanna2/\n> ├─ test\n> │   ├─ label_0\n> │   │   ├── filename_0.jpg\n> │   │   └── filename_1.jpg\n> │   │   └── ...\n> │   ├─ label_1\n> │   │   └── ...\n> │   ├─ label_2\n> │   │   └── ...\n> │   └─ label_3\n> │   └── ...\n> ├─ train\n> │   ├─ label_0\n> │   │   └── ...\n> │   ├─ label_1\n> │   │   └── ...\n> │   ├─ label_2\n> │   │   └── ...\n> │   └─ label_3\n> │   └── ...\n> └── validation\n>    ├─ label_0\n>    │   └── ...\n>    ├─ label_1\n>    │   └── ...\n>    ├─ label_2\n>    │   └── ...\n> └─ label_3\n> └── ...\n> ```\n\nI have read this documentation more than once. It just wasn't a problem before.", "Hi,\r\n\r\nYou need to use:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\", split=\"train\", data_dir=\"path_to_your_folder\")\r\n```\r\ninstead of \r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\r\n```\r\nTo create an image dataset from your local folders.", "> Hi,\r\n> \r\n> You need to use:\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"imagefolder\", split=\"train\", data_dir=\"path_to_your_folder\")\r\n> ```\r\n> \r\n> instead of\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"my_folder_name\", split=\"train\")\r\n> ```\r\n> \r\n> To create an image dataset from your local folders.\r\n\r\nThank you, but even using the method that you wrote above absolutely nothing changes, especially without using data_dir on my other data everything works fine", "@WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \r\n\r\n\r\n> even using the method that you wrote above absolutely nothing changes\r\n\r\nfyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.", "Oh I didn't know that! OK but in any case, not sure why the image builder isn't working for @WiNE-iNEFF. But it's hard for us to help if we can't reproduce. 
I'd just check the structure of the folders, see if the splits are correctly set up, etc.", "> @WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \n> \n> \n> > even using the method that you wrote above absolutely nothing changes\n> \n> fyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.\n\nI'll try to try your method over the next few days, then I'll write it turned out ", "> @WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \n> \n> \n> > even using the method that you wrote above absolutely nothing changes\n> \n> fyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.\n\nI tried creating a `train` folder and put my image folders in it. As a result, all 18,000 images were loaded. ", "@WiNE-iNEFF great! So to explain what happened according to my assumptions:\r\n\r\nWhen you use a standard packaged loader (like `imagefolder`, `csv`, `jsonl`, and so on) and load your data like `load_dataset(\"my_folder_name\")` or `load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`, the library searches for patterns to divide files into splits. This is described a bit in [this doc](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#splits-and-file-names). And the order to search for patterns is the following:\r\n1. first it checks for [pattern like `data/<split_name>-xxxxx-of-xxxxx`](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#custom-split-names) (which allows to pass custom split names)\r\n2. then for directories named as splits (if you have directories named `train`, `test` etc.)\r\n3. then for [splits in filenames](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#splits-and-file-names) (like if you have files named `train-image.jpg`, `test_0.jpg`, ...)\r\n4. then if no pattern was found, it treats all files as belonging to a single `train` split\r\n\r\nThe code is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L215).\r\nSo I assume that in your case, since you didn't have directories for splits (pattern 2), some files that included split keywords (pattern 3) were included and others were ignored as not matching the pattern. And when you added `train` directory, the pattern for directories (pattern 2) was triggered first and everything worked as expected. 
Everything worked in your previous cases probably because you didn't have split names keywords in filenames, so all the files ended up being a part of a single train split (pattern 4).\r\n\r\nAnother way to mitigate this apart from structuring your data according to the patterns is to explicitly state with files belong to which splits by passing them with `data_files` parameter:\r\n```python\r\nload_dataset(\"my_folder_name\", data_files={\"train\": \"**\"}) # to tell that all files should be included \r\n```\r\n\r\nNow I see that this order should be explained in documentation and also referenced in sections for packaged modules like `imagefolder`, thank you for pointing this out. \r\n\r\n \r\n", "@NielsRogge @polinaeterna I have a similar problem when reading my dataset. I want to use DETR for object detection, but my data is in YOLO format. With a dataset of 10k images, yolo format involves having 10k labels. As far as I read regarding [COCO format](https://auto.gluon.ai/stable/tutorials/multimodal/object_detection/data_preparation/convert_data_to_coco_format.html), there must be one JSON per split. However, as I post in the [Hugging Face forum](https://discuss.huggingface.co/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894), when it is read, the number of rows is 1, which does not make sense. \r\nThe instruction to read the train-val-test splits are: \r\n```python\r\nfrom datasets import load_dataset\r\ndata_files = {\r\n\t\"train\": './train_labels.json',\r\n\t\"validation\": './val_labels.json',\r\n\t\"test\": './test_labels.json'\r\n}\r\ndataset = load_dataset(\"json\", data_files=data_files)\r\n```\r\nAn example of the short version of the json file I read, to reproduce my error, is the following: \r\n\r\n``` json\r\n{\r\n \"info\": {},\r\n \"licenses\": [],\r\n \"images\": [\r\n {\r\n \"id\": 1,\r\n \"file_name\": \"aceca_100.mp4frame21.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null\r\n },\r\n {\r\n \"id\": 2,\r\n \"file_name\": \"aceca_100.mp4frame24.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null\r\n },\r\n {\r\n \"id\": 3,\r\n \"file_name\": \"aceca_100.mp4frame25.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null}],\r\n \"annotations\": [\r\n {\r\n \"id\": 1,\r\n \"image_id\": 1,\r\n \"category_id\": 0,\r\n \"bbox\": [0.0, 278.21896388398557, 86.94096523844935, 156.0293445072134],\r\n \"area\": 13565.341816979679,\r\n \"iscrowd\": 0\r\n },\r\n {\r\n \"id\": 2,\r\n \"image_id\": 2,\r\n \"category_id\": 0,\r\n \"bbox\": [149.28851295721816, 297.6359759754418, 34.76802347007475, 98.03908698442889],\r\n \"area\": 3408.625277259324,\r\n \"iscrowd\": 0\r\n },\r\n {\r\n \"id\": 3,\r\n \"image_id\": 3,\r\n \"category_id\": 0,\r\n \"bbox\": [153.3817197549372, 300.168969412891, 31.787555842913775, 89.69583163436312],\r\n \"area\": 2851.2112569539095,\r\n \"iscrowd\": 0\r\n }\r\n ],\r\n \"categories\": [\r\n {\r\n \"id\": 0, \"name\": \"person\"\r\n }\r\n ]\r\n }\r\n```\r\nIf full files required, my email is aruigui98@gmail.com", "Hi @Alberto1404, to load an object detection dataset it's recommended to make use of the metadata feature as explained [here](https://huggingface.co/docs/datasets/image_dataset#object-detection). ", "Thank you @NielsRogge! It works!!!", "You can now refer to https://huggingface.co/docs/datasets/repository_structure to learn about the `datasets`' data files inference, so I'm closing this issue." 
]
"2023-03-18T13:59:13"
"2023-07-24T14:13:02"
"2023-07-24T14:13:01"
NONE
null
I have about 20000 images in my folder, divided into 4 subfolders named after the classes. When I use load_dataset("my_folder_name", split="train"), the function creates a dataset that contains only 4 images; the remaining ~19000 images are not added. I don't understand what the problem is. I tried converting the images and similar workarounds, but absolutely nothing worked
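A short sketch of the two fixes suggested in the discussion above (the "MyData" path is the reporter's folder name and is only illustrative):

```python
from datasets import load_dataset

# Option 1: force every file into a single train split, regardless of split
# keywords that happen to appear in the filenames
ds = load_dataset("MyData", data_files={"train": "**"}, split="train")

# Option 2 (what resolved the issue): move the class folders under an explicit
# train/ directory, i.e. MyData/train/<class_name>/<image>.png, then load normally
ds = load_dataset("MyData", split="train")
```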
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5650/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
https://api.github.com/repos/huggingface/datasets/issues/5649/events
https://github.com/huggingface/datasets/issues/5649
1,630,173,460
I_kwDODunzps5hKnkU
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
{ "login": "lsb", "id": 45281, "node_id": "MDQ6VXNlcjQ1Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lsb", "html_url": "https://github.com/lsb", "followers_url": "https://api.github.com/users/lsb/followers", "following_url": "https://api.github.com/users/lsb/following{/other_user}", "gists_url": "https://api.github.com/users/lsb/gists{/gist_id}", "starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsb/subscriptions", "organizations_url": "https://api.github.com/users/lsb/orgs", "repos_url": "https://api.github.com/users/lsb/repos", "events_url": "https://api.github.com/users/lsb/events{/privacy}", "received_events_url": "https://api.github.com/users/lsb/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ", "I think this is low enough priority for me to close this as Won't Fix. If I need any primary keys I can generate them beforehand. Feel free to reopen." ]
"2023-03-18T05:25:17"
"2023-06-17T07:01:57"
"2023-06-17T07:01:57"
NONE
null
### Describe the bug It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export. ### Steps to reproduce the bug ``` from datasets import Dataset import sqlite3 db = sqlite3.connect(":memory:") nice_numbers = Dataset.from_dict({"nice_number": range(101,106)}) nice_numbers.to_sql("nice1", db, batch_size=1) nice_numbers.to_sql("nice2", db, batch_size=2) print(db.execute("select * from nice1").fetchall()) # [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)] print(db.execute("select * from nice2").fetchall()) # [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)] ``` ### Expected behavior I expected the "index" column to be unique ### Environment info ``` % datasets-cli env Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.2 zsh: segmentation fault datasets-cli env ```
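A hedged sketch of the workaround the reporter alludes to: generate an explicit unique key up front and skip the written index (per the maintainer's note above about PR #5583, the `index` keyword is forwarded to `pandas.DataFrame.to_sql`; this sketch assumes that forwarding):

```python
import sqlite3
from datasets import Dataset

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": list(range(101, 106))})

# Add an explicit, globally unique key column before exporting
nice_numbers = nice_numbers.add_column("id", list(range(len(nice_numbers))))

# index=False should be forwarded to pandas.DataFrame.to_sql, so the per-batch
# index column is not written at all and only the explicit "id" column remains
nice_numbers.to_sql("nice", db, batch_size=2, index=False)

print(db.execute("select * from nice").fetchall())
# expected: [(101, 0), (102, 1), (103, 2), (104, 3), (105, 4)]
```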
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5648/comments
https://api.github.com/repos/huggingface/datasets/issues/5648/events
https://github.com/huggingface/datasets/issues/5648
1,629,253,719
I_kwDODunzps5hHHBX
5,648
flatten_indices doesn't work with pandas format
{ "login": "alialamiidrissi", "id": 14365168, "node_id": "MDQ6VXNlcjE0MzY1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alialamiidrissi", "html_url": "https://github.com/alialamiidrissi", "followers_url": "https://api.github.com/users/alialamiidrissi/followers", "following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}", "gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}", "starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions", "organizations_url": "https://api.github.com/users/alialamiidrissi/orgs", "repos_url": "https://api.github.com/users/alialamiidrissi/repos", "events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}", "received_events_url": "https://api.github.com/users/alialamiidrissi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indices mapping, so I'll incorporate the fix in that PR." ]
"2023-03-17T12:44:25"
"2023-03-21T13:12:03"
null
NONE
null
### Describe the bug Hi, I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably because `flatten_indices` uses `map` internally, which doesn't accept dataframes as the transformation function output. ### Steps to reproduce the bug tabular_data = pd.DataFrame(np.random.randn(10,10)) tabular_data = datasets.arrow_dataset.Dataset.from_pandas(tabular_data) tabular_data.with_format("pandas").select([0,1,2,3]).flatten_indices() ### Expected behavior No error is thrown. ### Environment info - `datasets` version: 2.10.1 - Python version: 3.9.5 - PyArrow version: 11.0.0 - Pandas version: 1.4.1
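A user-side workaround consistent with the fix described in the comment above: flatten the indices before the pandas format is applied (imports added; named columns are used only to keep the sketch self-contained):

```python
import numpy as np
import pandas as pd
import datasets

tabular_data = pd.DataFrame(np.random.randn(10, 10), columns=[f"c{i}" for i in range(10)])
ds = datasets.Dataset.from_pandas(tabular_data)

# Flatten the indices while the dataset is still in its default format,
# then re-apply the pandas format afterwards
flattened = ds.select([0, 1, 2, 3]).flatten_indices().with_format("pandas")
```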
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5648/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5647/comments
https://api.github.com/repos/huggingface/datasets/issues/5647/events
https://github.com/huggingface/datasets/issues/5647
1,628,225,544
I_kwDODunzps5hDMAI
5,647
Make all print statements optional
{ "login": "gagan3012", "id": 49101362, "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gagan3012", "html_url": "https://github.com/gagan3012", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "repos_url": "https://api.github.com/users/gagan3012/repos", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "related to #5444 ", "We now log these messages instead of printing them (addressed in #6019), so I'm closing this issue." ]
"2023-03-16T20:30:07"
"2023-07-21T14:20:25"
"2023-07-21T14:20:24"
NONE
null
### Feature request Make all print statements optional to speed up development. ### Motivation I'm loading multiple tiny datasets and all the print statements make loading slower. ### Your contribution I can help contribute.
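A sketch of what can already be silenced today through the library's logging utilities (the dataset name is only an example):

```python
import datasets

# Lower the library's log verbosity so informational messages are not printed
datasets.logging.set_verbosity_error()

# Hide the progress bars as well, if desired
datasets.disable_progress_bar()

ds = datasets.load_dataset("rotten_tomatoes", split="train")  # loads quietly now
```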
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5647/timeline
null
completed
null
null
false