| Field | Type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | dict |
| comments | list |
| created_at | timestamp[ns, tz=UTC] |
| updated_at | timestamp[ns, tz=UTC] |
| closed_at | timestamp[ns, tz=UTC] |
| author_association | string |
| type | float64 |
| active_lock_reason | float64 |
| sub_issues_summary | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | float64 |
| state_reason | string |
| draft | float64 |
| pull_request | dict |
https://api.github.com/repos/huggingface/datasets/issues/6699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6699/comments
https://api.github.com/repos/huggingface/datasets/issues/6699/events
https://github.com/huggingface/datasets/issues/6699
2,158,152,341
I_kwDODunzps6AosqV
6,699
`Dataset` unexpectedly changes dict data and may cause an error
{ "avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4", "events_url": "https://api.github.com/users/scruel/events{/privacy}", "followers_url": "https://api.github.com/users/scruel/followers", "following_url": "https://api.github.com/users/scruel/following{/other_user}", "gists_url": "https://api.github.com/users/scruel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/scruel", "id": 16933298, "login": "scruel", "node_id": "MDQ6VXNlcjE2OTMzMjk4", "organizations_url": "https://api.github.com/users/scruel/orgs", "received_events_url": "https://api.github.com/users/scruel/received_events", "repos_url": "https://api.github.com/users/scruel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scruel/subscriptions", "type": "User", "url": "https://api.github.com/users/scruel", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "If `test.jsonl` contains more lines like:\r\n```\r\n{\"id\": 0, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 1, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 2, \"indexs\": {\"-2\": [0, 10]}}\r\n...\r\n{\"id\": n, \"indexs\": {\"-9999\": [0, 10]}}\r\n```\r\n\r\n`Dataset.from_json` will just raise an error:\r\n```\r\nAn error occurred while generating the dataset\r\nTypeError: Couldn't cast array of type\r\nstruct<-5942: list<item: int64>, -5943: list<item: int64>, -5944: list<item: int64>, -5945: list<item: int64>, -5946: list<item: int64>, -5947: list<item: int64>, -5948: list<item: int64>, -5949: list<item: int64>: ...\r\nto\r\n{... '-5312': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), '-5313': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 120, in <module>\r\n reader = SnippetReader(jsonl_path, npy_path)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 85, in __init__\r\n self._dataset = Dataset.from_json(jsonl_path, features=)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/arrow_dataset.py\", line 1130, in from_json\r\n ).read()\r\n ^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/io/json.py\", line 59, in read\r\n self.builder.download_and_prepare(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File 
\"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1860, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 2016, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```", "Hi! Our JSON parser expects all examples/rows to share the same set of columns (applies to nested columns, too), hence the error. \r\n\r\nTo read the `index` column, we would have to manually cast the input to PyArrow's `pa.map_` type, but this requires a more thorough investigation, as `pa.map_` has limited support in PyArrow." ]
2024-02-28T05:30:10Z
2024-02-28T19:14:36Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

Keys with `None` values unexpectedly appear in the parsed JSON dict.

### Steps to reproduce the bug

`test.jsonl`:
```jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```

```python
dataset = Dataset.from_json('test.jsonl')
print(dataset[0])
```

Result:
```
{'id': 0, 'indexs': {'-1': [...], '-2': None, '-3': None, '-4': None, '-5': None, '-6': None, '-7': None, '-8': None, '-9': None, ...}}
```

These keys with `None` values unexpectedly appear in the dict.

### Expected behavior

The result should be:
```
{'id': 0, 'indexs': {'-1': [0, 10]}}
```

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6699/timeline
null
null
null
null
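The maintainer's comment on this issue points at PyArrow's `pa.map_` type as the representation that would keep each row's own keys instead of unioning them into one struct. A minimal sketch of that idea, assuming the `test.jsonl` file from the report — a manual workaround for illustration, not the `datasets` library's behavior:

```python
import json

import pyarrow as pa

rows = [json.loads(line) for line in open("test.jsonl")]

# A struct type unions the keys of "indexs" across all rows and fills the
# missing ones with nulls (the behavior reported above). A map type instead
# keeps only the key/value pairs present in each row.
indexs = pa.array(
    [list(row["indexs"].items()) for row in rows],
    type=pa.map_(pa.string(), pa.list_(pa.int64())),
)
ids = pa.array([row["id"] for row in rows], type=pa.int64())
table = pa.table({"id": ids, "indexs": indexs})

print(table.to_pylist()[0])  # {'id': 0, 'indexs': [('-1', [0, 10])]}
```

As the comment notes, `pa.map_` has limited support in downstream tooling, which is why this is a sketch rather than a drop-in fix.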
https://api.github.com/repos/huggingface/datasets/issues/5764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5764/comments
https://api.github.com/repos/huggingface/datasets/issues/5764/events
https://github.com/huggingface/datasets/issues/5764
1,670,740,198
I_kwDODunzps5jlXjm
5,764
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
{ "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sauravtii", "id": 109907638, "login": "sauravtii", "node_id": "U_kgDOBo0Otg", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "repos_url": "https://api.github.com/users/sauravtii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "type": "User", "url": "https://api.github.com/users/sauravtii", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.", "Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```", "Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? 
https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself\r\n\r\nIf the link works, you should try to load the dataset but forcing the re-download of the data files (so that the cache is refreshed with the actual data file), by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "After pasting the link in the browser, it did start the download so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "I have tried again to reproduce your issue without success: the dataset loads perfectly, both in my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause maight be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```", "That worked!! 
Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?", "That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`." ]
2023-04-17T09:08:18Z
2023-04-18T07:18:20Z
2023-04-18T07:18:20Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

I want to use this dataset (https://huggingface.co/datasets/josianem/imdb), so I am trying to load it with the following code:

```
dataset = load_dataset("josianem/imdb")
```

The dataset does not load and gives the following error message:

```
Traceback (most recent call last):
  File "sample.py", line 3, in <module>
    dataset = load_dataset("josianem/imdb")
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
    self._download_and_prepare(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators
    archive = dl_manager.download(_DOWNLOAD_URL)
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
    downloaded_path_or_paths = map_nested(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
    return function(data_struct)
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path
    output_path = get_from_cache(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
```

### Steps to reproduce the bug

You can reproduce the error with the following code:

```
from datasets import load_dataset, load_metric

dataset = load_dataset("josianem/imdb")
```

### Expected behavior

The dataset should load (I am using this dataset for the first time, so I am not sure of the exact behavior).

### Environment info

- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5764/timeline
null
completed
null
null
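The resolution in this thread is a cache-refresh pattern: a failed first download can leave an empty file in the cache, and `download_mode="force_redownload"` replaces it. A small sketch of that recovery flow, with the broad `except` an assumption added for illustration:

```python
from datasets import load_dataset

try:
    dataset = load_dataset("josianem/imdb")
except Exception:
    # A transient connection failure can leave an empty file in the cache;
    # force a re-download so the cache is refreshed with the real data.
    dataset = load_dataset("josianem/imdb", download_mode="force_redownload")
```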
https://api.github.com/repos/huggingface/datasets/issues/6675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6675/comments
https://api.github.com/repos/huggingface/datasets/issues/6675/events
https://github.com/huggingface/datasets/issues/6675
2,139,640,381
I_kwDODunzps5_iFI9
6,675
Allow image mode (color conversion) to be specified as part of datasets Image() decode
{ "avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4", "events_url": "https://api.github.com/users/rwightman/events{/privacy}", "followers_url": "https://api.github.com/users/rwightman/followers", "following_url": "https://api.github.com/users/rwightman/following{/other_user}", "gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rwightman", "id": 5702664, "login": "rwightman", "node_id": "MDQ6VXNlcjU3MDI2NjQ=", "organizations_url": "https://api.github.com/users/rwightman/orgs", "received_events_url": "https://api.github.com/users/rwightman/received_events", "repos_url": "https://api.github.com/users/rwightman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwightman/subscriptions", "type": "User", "url": "https://api.github.com/users/rwightman", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "It would be a great addition indeed :)\r\n\r\nThis can be implemented the same way we have `sampling_rate` for Audio(): we just add a new parameter to the Image() type and take this parameter into account in `Image.decode_example`\r\n\r\nEDIT: adding an example of how it can be used:\r\n\r\n```python\r\nds = ds.cast_column(\"image\", Image(mode=...))\r\n```" ]
2024-02-16T23:43:20Z
2024-03-18T15:41:34Z
2024-03-18T15:41:34Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request

Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code, as part of image decoding, separately from the image transform stack. This is true for `PIL.Image`, where `convert` is usually called in the dataset; for native torchvision (https://pytorch.org/vision/main/generated/torchvision.io.decode_jpeg.html); and similarly in tensorflow.data pipelines, where `decode_jpeg` or https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg have a `channels` arg that allows controlling the image mode in the decode step.

`datasets` currently requires this pattern (from the [examples](https://huggingface.co/docs/datasets/main/en/image_process)):

```
from torchvision.transforms import Compose, ColorJitter, ToTensor

jitter = Compose(
    [
        ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7),
        ToTensor(),
    ]
)

def transforms(examples):
    examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]]
    return examples
```

### Motivation

It would be nice to handle `image.convert("RGB")` (or other modes) in the decode step, before applying torchvision transforms. This would reduce code differences between pipelines built on torchvision, webdataset, or HF datasets, and avoid passing an image-mode argument at two different stages of the pipeline.

### Your contribution

Can do a PR with guidance on how the mode should be passed / set on the dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6675/timeline
null
completed
null
null
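The comment above suggests mirroring Audio()'s `sampling_rate` parameter with a `mode` parameter on Image(), applied in `Image.decode_example`. A hedged sketch of what the request looks like in use, assuming a `datasets` version where `Image` accepts a `mode` argument; the "beans" dataset name is illustrative:

```python
from datasets import Image, load_dataset
from torchvision.transforms import ColorJitter, Compose, ToTensor

ds = load_dataset("beans", split="train")
# Decode straight to RGB, instead of calling image.convert("RGB") per sample
ds = ds.cast_column("image", Image(mode="RGB"))

jitter = Compose(
    [ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7), ToTensor()]
)

def transforms(examples):
    # No per-image mode handling needed here anymore
    examples["pixel_values"] = [jitter(image) for image in examples["image"]]
    return examples

ds = ds.with_transform(transforms)
```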
https://api.github.com/repos/huggingface/datasets/issues/5379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5379/comments
https://api.github.com/repos/huggingface/datasets/issues/5379/events
https://github.com/huggingface/datasets/pull/5379
1,504,010,639
PR_kwDODunzps5F1r2k
5,379
feat: depth estimation dataset guide.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" } ]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the changes, looks good to me!", "@stevhliu I have pushed some quality improvements both in terms of code and content. Would you be able to re-review? ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008325 / 0.011353 (-0.003028) | 0.004432 / 0.011008 (-0.006576) | 0.099794 / 0.038508 (0.061286) | 0.029469 / 0.023109 (0.006360) | 0.306554 / 0.275898 (0.030656) | 0.367373 / 0.323480 (0.043893) | 0.007532 / 0.007986 (-0.000454) | 0.003310 / 0.004328 (-0.001018) | 0.077453 / 0.004250 (0.073203) | 0.034836 / 0.037052 (-0.002216) | 0.311696 / 0.258489 (0.053207) | 0.349683 / 0.293841 (0.055842) | 0.033089 / 0.128546 (-0.095457) | 0.011339 / 0.075646 (-0.064307) | 0.321699 / 0.419271 (-0.097573) | 0.040213 / 0.043533 (-0.003320) | 0.304741 / 0.255139 (0.049602) | 0.331569 / 0.283200 (0.048369) | 0.090397 / 0.141683 (-0.051285) | 1.526001 / 1.452155 (0.073847) | 1.558863 / 1.492716 (0.066146) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179446 / 0.018006 (0.161440) | 0.416308 / 0.000490 (0.415818) | 0.002390 / 0.000200 (0.002190) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023641 / 0.037411 (-0.013770) | 0.096672 / 0.014526 (0.082147) | 0.104330 / 0.176557 (-0.072227) | 0.146338 / 0.737135 (-0.590797) | 0.108278 / 0.296338 (-0.188060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420194 / 0.215209 (0.204985) | 4.196981 / 2.077655 (2.119326) | 1.861206 / 1.504120 (0.357086) | 1.658748 / 1.541195 (0.117554) | 1.704309 / 1.468490 (0.235819) | 0.691639 / 4.584777 (-3.893138) | 3.346303 / 3.745712 (-0.399409) | 1.932962 / 5.269862 (-3.336900) | 1.299395 / 4.565676 (-3.266281) | 0.081869 / 0.424275 (-0.342406) | 0.012415 / 0.007607 (0.004808) | 0.530805 / 0.226044 (0.304761) | 5.293486 / 2.268929 (3.024558) | 2.328327 / 55.444624 (-53.116297) | 1.964956 / 6.876477 (-4.911521) | 2.002793 / 2.142072 (-0.139280) | 0.813380 / 4.805227 (-3.991847) | 0.150030 / 6.500664 (-6.350634) | 0.065194 / 0.075469 (-0.010275) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259421 / 1.841788 (-0.582367) | 13.667796 / 8.074308 (5.593488) | 13.819121 / 10.191392 (3.627729) | 0.136718 / 0.680424 (-0.543706) | 0.028510 / 0.534201 (-0.505691) | 0.402246 / 0.579283 (-0.177037) | 0.405279 / 0.434364 (-0.029085) | 0.467185 / 0.540337 (-0.073153) | 0.554213 / 1.386936 (-0.832723) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006738 / 0.011353 (-0.004615) | 0.004616 / 0.011008 (-0.006393) | 0.096978 / 0.038508 (0.058470) | 0.027750 / 0.023109 (0.004640) | 0.411505 / 0.275898 (0.135607) | 0.441796 / 0.323480 (0.118316) | 0.005073 / 0.007986 (-0.002913) | 0.003360 / 0.004328 (-0.000968) | 0.074445 / 0.004250 (0.070194) | 0.040654 / 0.037052 (0.003602) | 0.414277 / 0.258489 (0.155788) | 0.448665 / 0.293841 (0.154824) | 0.032346 / 0.128546 (-0.096200) | 0.011533 / 0.075646 (-0.064114) | 0.317349 / 0.419271 (-0.101923) | 0.041934 / 0.043533 (-0.001599) | 0.409102 / 0.255139 (0.153963) | 0.429977 / 0.283200 (0.146777) | 0.089459 / 0.141683 (-0.052224) | 1.518127 / 1.452155 (0.065973) | 1.569902 / 1.492716 (0.077186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / 
old (diff) | 0.232648 / 0.018006 (0.214642) | 0.413751 / 0.000490 (0.413261) | 0.000404 / 0.000200 (0.000204) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025468 / 0.037411 (-0.011943) | 0.098195 / 0.014526 (0.083669) | 0.108882 / 0.176557 (-0.067674) | 0.150059 / 0.737135 (-0.587076) | 0.110742 / 0.296338 (-0.185597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445326 / 0.215209 (0.230117) | 4.449200 / 2.077655 (2.371545) | 2.098939 / 1.504120 (0.594819) | 1.861207 / 1.541195 (0.320012) | 1.901385 / 1.468490 (0.432894) | 0.695287 / 4.584777 (-3.889490) | 3.461775 / 3.745712 (-0.283938) | 2.998566 / 5.269862 (-2.271296) | 1.555036 / 4.565676 (-3.010641) | 0.082789 / 0.424275 (-0.341486) | 0.012772 / 0.007607 (0.005165) | 0.564855 / 0.226044 (0.338811) | 5.631049 / 2.268929 (3.362120) | 2.543771 / 55.444624 (-52.900854) | 2.194378 / 6.876477 (-4.682099) | 2.267168 / 2.142072 (0.125095) | 0.803330 / 4.805227 (-4.001898) | 0.151336 / 6.500664 (-6.349328) | 0.067015 / 0.075469 (-0.008454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298422 / 1.841788 (-0.543366) | 13.933637 / 8.074308 (5.859329) | 13.570848 / 10.191392 (3.379456) | 0.150787 / 0.680424 (-0.529637) | 0.016911 / 0.534201 (-0.517290) | 0.384771 / 0.579283 (-0.194512) | 0.397505 / 0.434364 (-0.036858) | 0.450931 / 0.540337 (-0.089406) | 0.534501 / 1.386936 (-0.852435) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n", "@lhoestq @nateraw made some changes as per the comments. PTAL and approve as necessary. 
", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009037 / 0.011353 (-0.002316) | 0.004970 / 0.011008 (-0.006038) | 0.099223 / 0.038508 (0.060715) | 0.034935 / 0.023109 (0.011826) | 0.297027 / 0.275898 (0.021129) | 0.352861 / 0.323480 (0.029382) | 0.007558 / 0.007986 (-0.000427) | 0.003903 / 0.004328 (-0.000425) | 0.075663 / 0.004250 (0.071413) | 0.042577 / 0.037052 (0.005524) | 0.307182 / 0.258489 (0.048693) | 0.344237 / 0.293841 (0.050396) | 0.041438 / 0.128546 (-0.087108) | 0.012159 / 0.075646 (-0.063487) | 0.333771 / 0.419271 (-0.085501) | 0.047847 / 0.043533 (0.004314) | 0.290797 / 0.255139 (0.035658) | 0.320517 / 0.283200 (0.037318) | 0.098334 / 0.141683 (-0.043349) | 1.446187 / 1.452155 (-0.005968) | 1.495506 / 1.492716 (0.002789) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203704 / 0.018006 (0.185698) | 0.441325 / 0.000490 (0.440835) | 0.001173 / 0.000200 (0.000973) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026694 / 0.037411 (-0.010718) | 0.103819 / 0.014526 (0.089294) | 0.116377 / 0.176557 (-0.060179) | 0.158280 / 0.737135 (-0.578856) | 0.119797 / 0.296338 (-0.176541) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405723 / 0.215209 (0.190514) | 4.047633 / 2.077655 (1.969979) | 1.805652 / 1.504120 (0.301532) | 1.611382 / 1.541195 (0.070187) | 1.663117 / 1.468490 
(0.194627) | 0.692589 / 4.584777 (-3.892188) | 3.689970 / 3.745712 (-0.055742) | 2.089760 / 5.269862 (-3.180101) | 1.450576 / 4.565676 (-3.115101) | 0.085276 / 0.424275 (-0.338999) | 0.012042 / 0.007607 (0.004434) | 0.513159 / 0.226044 (0.287115) | 5.123235 / 2.268929 (2.854306) | 2.281864 / 55.444624 (-53.162761) | 1.926170 / 6.876477 (-4.950307) | 2.035093 / 2.142072 (-0.106979) | 0.857457 / 4.805227 (-3.947770) | 0.166088 / 6.500664 (-6.334576) | 0.062115 / 0.075469 (-0.013354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197776 / 1.841788 (-0.644012) | 14.674452 / 8.074308 (6.600144) | 14.275990 / 10.191392 (4.084598) | 0.170848 / 0.680424 (-0.509576) | 0.028613 / 0.534201 (-0.505588) | 0.438650 / 0.579283 (-0.140633) | 0.439323 / 0.434364 (0.004959) | 0.515090 / 0.540337 (-0.025247) | 0.614216 / 1.386936 (-0.772720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007159 / 0.011353 (-0.004194) | 0.005142 / 0.011008 (-0.005866) | 0.096953 / 0.038508 (0.058445) | 0.033036 / 0.023109 (0.009927) | 0.391790 / 0.275898 (0.115892) | 0.427120 / 0.323480 (0.103640) | 0.005691 / 0.007986 (-0.002294) | 0.004848 / 0.004328 (0.000519) | 0.072258 / 0.004250 (0.068008) | 0.049017 / 0.037052 (0.011965) | 0.387267 / 0.258489 (0.128778) | 0.437112 / 0.293841 (0.143272) | 0.036360 / 0.128546 (-0.092186) | 0.012249 / 0.075646 (-0.063397) | 0.336246 / 0.419271 (-0.083025) | 0.048777 / 0.043533 (0.005244) | 0.397872 / 0.255139 (0.142733) | 0.399768 / 0.283200 (0.116568) | 0.101283 / 0.141683 (-0.040400) | 1.443999 / 1.452155 (-0.008156) | 1.575496 / 1.492716 (0.082779) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220952 / 0.018006 (0.202946) | 0.442220 / 0.000490 (0.441730) | 0.000406 / 0.000200 (0.000206) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028626 / 0.037411 (-0.008786) | 0.109929 / 0.014526 (0.095403) | 0.120989 / 0.176557 (-0.055568) | 0.157377 / 0.737135 (-0.579758) | 0.125522 / 0.296338 (-0.170816) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436565 / 0.215209 (0.221356) | 4.380771 / 2.077655 (2.303117) | 2.200003 / 1.504120 (0.695883) | 2.013289 / 1.541195 (0.472094) | 2.052658 / 1.468490 (0.584168) | 0.703706 / 4.584777 (-3.881071) | 3.823289 / 3.745712 (0.077577) | 2.064882 / 5.269862 (-3.204980) | 1.330834 / 4.565676 (-3.234842) | 0.085945 / 0.424275 (-0.338330) | 0.012511 / 0.007607 (0.004904) | 0.544171 / 0.226044 (0.318127) | 5.476059 / 2.268929 (3.207130) | 2.695586 / 55.444624 (-52.749039) | 2.330239 / 6.876477 (-4.546238) | 2.429290 / 2.142072 (0.287218) | 0.843154 / 4.805227 (-3.962073) | 0.169334 / 6.500664 (-6.331330) | 0.064261 / 0.075469 (-0.011209) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268344 / 1.841788 (-0.573444) | 14.934342 / 8.074308 (6.860034) | 13.555389 / 10.191392 (3.363997) | 0.142725 / 0.680424 (-0.537699) | 0.017891 / 0.534201 (-0.516310) | 0.424833 / 0.579283 (-0.154450) | 0.420035 / 0.434364 (-0.014329) | 0.491009 / 0.540337 (-0.049329) | 0.586953 / 1.386936 (-0.799983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n", "Merging this PR with approvals from @stevhliu @lhoestq. 
", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008586 / 0.011353 (-0.002767) | 0.004659 / 0.011008 (-0.006350) | 0.100343 / 0.038508 (0.061835) | 0.029861 / 0.023109 (0.006751) | 0.301090 / 0.275898 (0.025192) | 0.369528 / 0.323480 (0.046048) | 0.006920 / 0.007986 (-0.001065) | 0.003513 / 0.004328 (-0.000815) | 0.078514 / 0.004250 (0.074263) | 0.035285 / 0.037052 (-0.001767) | 0.311257 / 0.258489 (0.052768) | 0.353995 / 0.293841 (0.060154) | 0.033733 / 0.128546 (-0.094813) | 0.011489 / 0.075646 (-0.064157) | 0.323095 / 0.419271 (-0.096176) | 0.040808 / 0.043533 (-0.002725) | 0.301779 / 0.255139 (0.046640) | 0.348517 / 0.283200 (0.065318) | 0.086962 / 0.141683 (-0.054721) | 1.496270 / 1.452155 (0.044115) | 1.514260 / 1.492716 (0.021544) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189502 / 0.018006 (0.171496) | 0.419326 / 0.000490 (0.418837) | 0.002160 / 0.000200 (0.001960) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023669 / 0.037411 (-0.013742) | 0.096574 / 0.014526 (0.082048) | 0.105970 / 0.176557 (-0.070587) | 0.148531 / 0.737135 (-0.588605) | 0.109948 / 0.296338 (-0.186391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424968 / 0.215209 (0.209759) | 4.246292 / 2.077655 (2.168637) | 1.911062 / 1.504120 (0.406943) | 1.700733 / 1.541195 (0.159538) | 1.760756 / 
1.468490 (0.292266) | 0.696966 / 4.584777 (-3.887811) | 3.372320 / 3.745712 (-0.373392) | 2.886281 / 5.269862 (-2.383581) | 1.553082 / 4.565676 (-3.012594) | 0.082835 / 0.424275 (-0.341440) | 0.012688 / 0.007607 (0.005081) | 0.536352 / 0.226044 (0.310308) | 5.382510 / 2.268929 (3.113582) | 2.365664 / 55.444624 (-53.078960) | 1.995631 / 6.876477 (-4.880845) | 2.073865 / 2.142072 (-0.068207) | 0.819109 / 4.805227 (-3.986118) | 0.150278 / 6.500664 (-6.350386) | 0.065201 / 0.075469 (-0.010268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239835 / 1.841788 (-0.601953) | 13.911847 / 8.074308 (5.837539) | 13.500433 / 10.191392 (3.309041) | 0.137153 / 0.680424 (-0.543271) | 0.028451 / 0.534201 (-0.505750) | 0.394659 / 0.579283 (-0.184625) | 0.404915 / 0.434364 (-0.029449) | 0.458944 / 0.540337 (-0.081394) | 0.542288 / 1.386936 (-0.844648) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006791 / 0.011353 (-0.004562) | 0.004590 / 0.011008 (-0.006419) | 0.098697 / 0.038508 (0.060189) | 0.027634 / 0.023109 (0.004525) | 0.344383 / 0.275898 (0.068485) | 0.385607 / 0.323480 (0.062127) | 0.005413 / 0.007986 (-0.002573) | 0.003447 / 0.004328 (-0.000881) | 0.077268 / 0.004250 (0.073018) | 0.041823 / 0.037052 (0.004770) | 0.342904 / 0.258489 (0.084414) | 0.399371 / 0.293841 (0.105530) | 0.032668 / 0.128546 (-0.095879) | 0.011598 / 0.075646 (-0.064048) | 0.319973 / 0.419271 (-0.099299) | 0.041760 / 0.043533 (-0.001773) | 0.340510 / 0.255139 (0.085371) | 0.377929 / 0.283200 (0.094730) | 0.090889 / 0.141683 (-0.050793) | 1.496068 / 1.452155 (0.043913) | 1.574884 / 1.492716 (0.082168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230489 / 0.018006 (0.212483) | 0.425234 / 0.000490 (0.424745) | 0.000406 / 0.000200 (0.000206) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024650 / 0.037411 (-0.012761) | 0.102706 / 0.014526 (0.088180) | 0.108017 / 0.176557 (-0.068539) | 0.143645 / 0.737135 (-0.593490) | 0.110556 / 0.296338 (-0.185782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468038 / 0.215209 (0.252829) | 4.670514 / 2.077655 (2.592860) | 2.446620 / 1.504120 (0.942500) | 2.241255 / 1.541195 (0.700060) | 2.286409 / 1.468490 (0.817919) | 0.698923 / 4.584777 (-3.885854) | 3.401121 / 3.745712 (-0.344592) | 1.892399 / 5.269862 (-3.377462) | 1.163101 / 4.565676 (-3.402575) | 0.082567 / 0.424275 (-0.341708) | 0.012662 / 0.007607 (0.005055) | 0.571262 / 0.226044 (0.345218) | 5.731740 / 2.268929 (3.462812) | 2.879649 / 55.444624 (-52.564975) | 2.533846 / 6.876477 (-4.342631) | 2.654789 / 2.142072 (0.512717) | 0.811345 / 4.805227 (-3.993882) | 0.152495 / 6.500664 (-6.348169) | 0.067748 / 0.075469 (-0.007721) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267852 / 1.841788 (-0.573935) | 14.114920 / 8.074308 (6.040612) | 14.355403 / 10.191392 (4.164011) | 0.150393 / 0.680424 (-0.530031) | 0.016855 / 0.534201 (-0.517346) | 0.378710 / 0.579283 (-0.200573) | 0.385380 / 0.434364 (-0.048984) | 0.439054 / 0.540337 (-0.101284) | 0.524343 / 1.386936 (-0.862593) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n" ]
2022-12-20T05:32:11Z
2023-01-13T12:30:31Z
2023-01-13T12:23:34Z
MEMBER
null
null
null
This PR adds a guide for preparing datasets for depth estimation. The PR to add the documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22
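A minimal sketch of the kind of workflow such a guide covers: pairing RGB images with their depth maps in a `datasets.Dataset` and attaching a lazy transform. The repo id `sayakpaul/nyu_depth_v2` and the column names `image`/`depth_map` are assumptions for illustration, not necessarily what the guide itself uses.

```python
from datasets import load_dataset

# Hypothetical repo id and column names; the guide's actual dataset may differ.
ds = load_dataset("sayakpaul/nyu_depth_v2", split="train[:10]")

def transforms(batch):
    # Keep each RGB image aligned with its depth map: any crop/flip applied
    # to one must be applied to the other before training.
    batch["pixel_values"] = [img.convert("RGB") for img in batch["image"]]
    batch["labels"] = batch["depth_map"]
    return batch

# set_transform applies the function lazily, on __getitem__, instead of
# rewriting the underlying Arrow data.
ds.set_transform(transforms)
print(ds[0].keys())
```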
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5379/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5379/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5379.diff", "html_url": "https://github.com/huggingface/datasets/pull/5379", "merged_at": "2023-01-13T12:23:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/5379.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5379" }
https://api.github.com/repos/huggingface/datasets/issues/6815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6815/comments
https://api.github.com/repos/huggingface/datasets/issues/6815/events
https://github.com/huggingface/datasets/pull/6815
2,246,197,070
PR_kwDODunzps5sz9eC
6,815
Remove `os.path.relpath` in `resolve_patterns`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6815). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005101 / 0.011353 (-0.006252) | 0.003478 / 0.011008 (-0.007531) | 0.063634 / 0.038508 (0.025126) | 0.030670 / 0.023109 (0.007561) | 0.240057 / 0.275898 (-0.035841) | 0.258726 / 0.323480 (-0.064754) | 0.004136 / 0.007986 (-0.003849) | 0.002667 / 0.004328 (-0.001662) | 0.048968 / 0.004250 (0.044718) | 0.043125 / 0.037052 (0.006073) | 0.249033 / 0.258489 (-0.009456) | 0.282630 / 0.293841 (-0.011211) | 0.027528 / 0.128546 (-0.101018) | 0.009987 / 0.075646 (-0.065660) | 0.210614 / 0.419271 (-0.208657) | 0.034965 / 0.043533 (-0.008567) | 0.239199 / 0.255139 (-0.015940) | 0.276891 / 0.283200 (-0.006309) | 0.017781 / 0.141683 (-0.123902) | 1.142795 / 1.452155 (-0.309360) | 1.184171 / 1.492716 (-0.308545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092075 / 0.018006 (0.074068) | 0.300709 / 0.000490 (0.300220) | 0.000217 / 0.000200 (0.000017) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017887 / 0.037411 (-0.019525) | 0.061134 / 0.014526 (0.046608) | 0.077075 / 0.176557 (-0.099482) | 0.118808 / 0.737135 (-0.618327) | 0.074961 / 0.296338 (-0.221377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280404 / 0.215209 (0.065194) | 2.759453 / 2.077655 (0.681798) | 1.437552 / 1.504120 (-0.066568) | 1.318703 / 1.541195 (-0.222492) | 1.313075 / 1.468490 (-0.155416) | 0.564876 / 4.584777 (-4.019901) | 2.381595 / 3.745712 (-1.364118) | 2.759171 / 5.269862 (-2.510691) | 1.725878 / 4.565676 (-2.839799) | 0.062627 / 0.424275 (-0.361648) | 0.005295 / 0.007607 (-0.002312) | 0.335245 / 0.226044 (0.109201) | 3.276266 / 2.268929 (1.007337) | 1.843272 / 55.444624 (-53.601353) | 1.519948 / 6.876477 (-5.356529) | 1.519626 / 2.142072 (-0.622447) | 0.637891 / 4.805227 (-4.167336) | 0.116260 / 6.500664 (-6.384404) | 0.041768 / 0.075469 (-0.033701) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981739 / 1.841788 (-0.860049) | 11.354768 / 8.074308 (3.280460) | 9.900585 / 10.191392 (-0.290807) | 0.130683 / 0.680424 (-0.549741) | 0.014122 / 0.534201 (-0.520079) | 0.297451 / 0.579283 (-0.281832) | 0.264786 / 0.434364 (-0.169577) | 0.337559 / 0.540337 (-0.202778) | 0.425131 / 1.386936 (-0.961805) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005182 / 0.011353 (-0.006171) | 0.003355 / 0.011008 (-0.007653) | 0.049842 / 0.038508 (0.011334) | 0.031094 / 0.023109 (0.007985) | 0.270080 / 0.275898 (-0.005818) | 0.291602 / 0.323480 (-0.031878) | 0.004210 / 0.007986 (-0.003776) | 0.002720 / 0.004328 (-0.001608) | 0.048986 / 0.004250 (0.044736) | 0.055187 / 0.037052 (0.018135) | 0.280085 / 0.258489 (0.021595) | 0.308148 / 0.293841 (0.014308) | 0.029300 / 0.128546 (-0.099246) | 0.009976 / 0.075646 (-0.065670) | 0.057930 / 0.419271 (-0.361341) | 0.032543 / 0.043533 (-0.010990) | 0.277485 / 0.255139 (0.022346) | 0.289345 / 0.283200 (0.006145) | 0.018070 / 0.141683 (-0.123613) | 1.140977 / 1.452155 (-0.311178) | 1.190543 / 1.492716 (-0.302173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093416 / 0.018006 (0.075410) | 0.298732 / 0.000490 (0.298242) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022167 / 0.037411 (-0.015244) | 0.074970 / 0.014526 (0.060444) | 0.086047 / 0.176557 (-0.090509) | 0.125228 / 0.737135 (-0.611907) | 0.088330 / 0.296338 (-0.208008) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292016 / 0.215209 (0.076807) | 2.845712 / 2.077655 (0.768057) | 1.576951 / 1.504120 (0.072831) | 1.452298 / 1.541195 (-0.088897) | 1.456918 / 1.468490 (-0.011572) | 0.560529 / 4.584777 (-4.024248) | 2.425333 / 3.745712 (-1.320379) | 2.739416 / 5.269862 (-2.530445) | 1.715779 / 4.565676 (-2.849898) | 0.062568 / 0.424275 (-0.361707) | 0.005327 / 0.007607 (-0.002280) | 0.351376 / 0.226044 (0.125332) | 3.401855 / 2.268929 (1.132927) | 1.921844 / 55.444624 (-53.522780) | 1.648423 / 6.876477 (-5.228054) | 1.642003 / 2.142072 (-0.500069) | 0.640789 / 4.805227 (-4.164438) | 0.114699 / 6.500664 (-6.385965) | 0.040451 / 0.075469 (-0.035018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004186 / 1.841788 (-0.837602) | 11.879918 / 8.074308 (3.805609) | 9.981852 / 10.191392 (-0.209540) | 0.141298 / 0.680424 (-0.539126) | 0.015005 / 0.534201 (-0.519196) | 0.291537 / 0.579283 (-0.287746) | 0.272093 / 0.434364 (-0.162271) | 0.331361 / 0.540337 (-0.208977) | 0.422940 / 1.386936 (-0.963996) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed8860faef3e751f3b77c08e09ce723a74d2c2e5 \"CML watermark\")\n" ]
2024-04-16T14:23:13Z
2024-04-16T16:06:48Z
2024-04-16T15:58:22Z
COLLABORATOR
null
null
null
... to save a few seconds when resolving repos with many data files.
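A toy illustration of the performance rationale, not the library's actual code: for a repo with hundreds of thousands of data files, per-path calls to `os.path.relpath` dominate pattern resolution, while a simple prefix strip is near-free. The shard paths below are fabricated for the benchmark; the two results coincide on POSIX-style paths that already live under `base`.

```python
import os
import time

base = "repo"
paths = [f"repo/data/shard_{i:06d}.parquet" for i in range(200_000)]

t0 = time.perf_counter()
rel_slow = [os.path.relpath(p, base) for p in paths]  # syscall-free but heavy string work
t1 = time.perf_counter()
rel_fast = [p[len(base) + 1:] for p in paths]  # assumes every path is under `base`
t2 = time.perf_counter()

print(f"os.path.relpath: {t1 - t0:.3f}s  prefix strip: {t2 - t1:.3f}s")
```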
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6815/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6815/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6815.diff", "html_url": "https://github.com/huggingface/datasets/pull/6815", "merged_at": "2024-04-16T15:58:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/6815.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6815" }
https://api.github.com/repos/huggingface/datasets/issues/6449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6449/comments
https://api.github.com/repos/huggingface/datasets/issues/6449/events
https://github.com/huggingface/datasets/pull/6449
2,008,617,992
PR_kwDODunzps5gQCVZ
6,449
Fix metadata file resolution when inferred pattern is `**`
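A toy repro of the situation the title describes, under assumed file names: when the inferred data-file pattern is `**`, a naive recursive glob sweeps up metadata files such as `metadata.jsonl` together with the actual data files, so metadata resolution has to single them out rather than treat them as data.

```python
import glob
import os
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
# Illustrative file names: one data file plus the imagefolder-style metadata file.
(root / "metadata.jsonl").write_text('{"file_name": "0001.png", "caption": "a cat"}\n')
(root / "0001.png").write_bytes(b"\x89PNG placeholder")  # not a real image

matched = glob.glob(str(root / "**"), recursive=True)
print(sorted(os.path.basename(m) for m in matched if os.path.isfile(m)))
# -> ['0001.png', 'metadata.jsonl']  (the metadata file matches the same "**" pattern)
```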
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005551 / 0.011353 (-0.005802) | 0.003297 / 0.011008 (-0.007711) | 0.062524 / 0.038508 (0.024016) | 0.058467 / 0.023109 (0.035358) | 0.255703 / 0.275898 (-0.020195) | 0.281420 / 0.323480 (-0.042060) | 0.003857 / 0.007986 (-0.004129) | 0.002460 / 0.004328 (-0.001868) | 0.047762 / 0.004250 (0.043512) | 0.038757 / 0.037052 (0.001705) | 0.259937 / 0.258489 (0.001448) | 0.290050 / 0.293841 (-0.003791) | 0.028433 / 0.128546 (-0.100113) | 0.010422 / 0.075646 (-0.065224) | 0.207135 / 0.419271 (-0.212136) | 0.036004 / 0.043533 (-0.007529) | 0.268137 / 0.255139 (0.012998) | 0.275020 / 0.283200 (-0.008179) | 0.018301 / 0.141683 (-0.123382) | 1.095479 / 1.452155 (-0.356676) | 1.145452 / 1.492716 (-0.347265) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092046 / 0.018006 (0.074040) | 0.299784 / 0.000490 (0.299294) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019071 / 0.037411 (-0.018340) | 0.072836 / 0.014526 (0.058310) | 0.073974 / 0.176557 (-0.102583) | 0.120903 / 0.737135 (-0.616232) | 0.075740 / 0.296338 (-0.220599) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276365 / 0.215209 (0.061156) | 2.671217 / 2.077655 (0.593563) | 1.438862 / 1.504120 (-0.065258) | 1.327348 / 1.541195 (-0.213847) | 1.349514 / 
1.468490 (-0.118976) | 0.548793 / 4.584777 (-4.035984) | 2.364458 / 3.745712 (-1.381255) | 2.716205 / 5.269862 (-2.553657) | 1.735714 / 4.565676 (-2.829963) | 0.061140 / 0.424275 (-0.363135) | 0.004926 / 0.007607 (-0.002681) | 0.330449 / 0.226044 (0.104404) | 3.255243 / 2.268929 (0.986315) | 1.824254 / 55.444624 (-53.620371) | 1.540262 / 6.876477 (-5.336215) | 1.535632 / 2.142072 (-0.606441) | 0.635224 / 4.805227 (-4.170003) | 0.116230 / 6.500664 (-6.384435) | 0.042706 / 0.075469 (-0.032763) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948796 / 1.841788 (-0.892992) | 11.448403 / 8.074308 (3.374095) | 10.523862 / 10.191392 (0.332470) | 0.129694 / 0.680424 (-0.550730) | 0.014146 / 0.534201 (-0.520055) | 0.285706 / 0.579283 (-0.293577) | 0.262572 / 0.434364 (-0.171792) | 0.321251 / 0.540337 (-0.219087) | 0.417130 / 1.386936 (-0.969806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005266 / 0.011353 (-0.006086) | 0.003339 / 0.011008 (-0.007670) | 0.048411 / 0.038508 (0.009903) | 0.053951 / 0.023109 (0.030842) | 0.271228 / 0.275898 (-0.004670) | 0.290066 / 0.323480 (-0.033414) | 0.004087 / 0.007986 (-0.003898) | 0.002446 / 0.004328 (-0.001882) | 0.047049 / 0.004250 (0.042798) | 0.040866 / 0.037052 (0.003813) | 0.273711 / 0.258489 (0.015222) | 0.298192 / 0.293841 (0.004351) | 0.029025 / 0.128546 (-0.099521) | 0.010479 / 0.075646 (-0.065167) | 0.056941 / 0.419271 (-0.362330) | 0.032914 / 0.043533 (-0.010619) | 0.270432 / 0.255139 (0.015293) | 0.291274 / 0.283200 (0.008074) | 0.018602 / 0.141683 (-0.123081) | 1.136707 / 1.452155 (-0.315447) | 1.184704 / 1.492716 (-0.308012) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090041 / 0.018006 (0.072035) | 0.300185 / 0.000490 (0.299696) | 0.000221 / 0.000200 (0.000022) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022074 / 0.037411 (-0.015337) | 0.070763 / 0.014526 (0.056237) | 0.082141 / 0.176557 (-0.094415) | 0.120286 / 0.737135 (-0.616850) | 0.082680 / 0.296338 (-0.213659) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292223 / 0.215209 (0.077014) | 2.856711 / 2.077655 (0.779056) | 1.581194 / 1.504120 (0.077075) | 1.496567 / 1.541195 (-0.044628) | 1.485256 / 1.468490 (0.016766) | 0.550633 / 4.584777 (-4.034144) | 2.420281 / 3.745712 (-1.325431) | 2.764373 / 5.269862 (-2.505489) | 1.735958 / 4.565676 (-2.829719) | 0.062562 / 0.424275 (-0.361714) | 0.004918 / 0.007607 (-0.002689) | 0.346038 / 0.226044 (0.119994) | 3.443478 / 2.268929 (1.174550) | 1.949366 / 55.444624 (-53.495259) | 1.686140 / 6.876477 (-5.190337) | 1.683038 / 2.142072 (-0.459034) | 0.629270 / 4.805227 (-4.175958) | 0.114947 / 6.500664 (-6.385717) | 0.040635 / 0.075469 (-0.034834) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969746 / 1.841788 (-0.872041) | 11.922662 / 8.074308 (3.848354) | 10.441432 / 10.191392 (0.250040) | 0.128950 / 0.680424 (-0.551473) | 0.015964 / 0.534201 (-0.518237) | 0.289176 / 0.579283 (-0.290107) | 0.279203 / 0.434364 (-0.155161) | 0.323833 / 0.540337 (-0.216505) | 0.540297 / 1.386936 (-0.846639) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ed759d0f5aea6d166caa0532aa17c209bb3af79 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005288 / 0.011353 (-0.006065) | 0.003383 / 0.011008 (-0.007625) | 0.061926 / 0.038508 (0.023418) | 0.049080 / 0.023109 (0.025971) | 0.244852 / 0.275898 (-0.031046) | 0.263957 / 0.323480 (-0.059523) | 0.002810 / 0.007986 (-0.005175) | 0.002384 / 0.004328 (-0.001945) | 0.047807 / 0.004250 (0.043556) | 0.038374 / 0.037052 (0.001321) | 0.244414 / 0.258489 (-0.014075) | 0.272257 / 0.293841 (-0.021584) | 0.027356 / 0.128546 (-0.101190) | 0.010235 / 0.075646 (-0.065411) | 0.214896 / 0.419271 (-0.204375) | 0.035604 / 0.043533 (-0.007929) | 0.246584 / 0.255139 (-0.008555) | 0.263281 / 0.283200 (-0.019918) | 0.019689 / 0.141683 (-0.121994) | 1.114100 / 1.452155 (-0.338054) | 1.177644 / 1.492716 (-0.315073) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088892 / 0.018006 (0.070886) | 0.298128 / 0.000490 (0.297639) | 0.000199 / 0.000200 (-0.000001) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019337 / 0.037411 (-0.018075) | 0.062096 / 0.014526 (0.047570) | 0.073019 / 0.176557 (-0.103537) | 0.118801 / 0.737135 (-0.618334) | 0.074779 / 0.296338 (-0.221559) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289892 / 0.215209 (0.074683) | 2.824131 / 2.077655 (0.746476) | 1.466351 / 1.504120 (-0.037768) | 1.339528 / 1.541195 (-0.201667) | 1.369257 / 1.468490 (-0.099233) | 0.561175 / 4.584777 (-4.023602) | 2.394174 / 3.745712 (-1.351538) | 2.749668 / 5.269862 (-2.520193) | 1.747146 / 4.565676 (-2.818530) | 0.063054 / 0.424275 (-0.361221) | 0.004970 / 0.007607 (-0.002637) | 0.342985 / 0.226044 (0.116941) | 3.334894 / 2.268929 (1.065966) | 1.838459 / 55.444624 (-53.606165) | 1.579755 / 6.876477 (-5.296722) | 1.560200 / 2.142072 (-0.581872) | 0.642643 / 4.805227 (-4.162585) | 0.117741 / 6.500664 (-6.382923) | 0.042440 / 0.075469 (-0.033029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.937476 / 1.841788 (-0.904312) | 11.403556 / 8.074308 (3.329248) | 10.317207 / 10.191392 (0.125815) | 0.145277 / 0.680424 (-0.535147) | 0.015297 / 0.534201 (-0.518904) | 0.287511 / 0.579283 (-0.291772) | 0.263516 / 0.434364 (-0.170848) | 0.320803 / 
0.540337 (-0.219534) | 0.415580 / 1.386936 (-0.971356) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003506 / 0.011008 (-0.007502) | 0.048635 / 0.038508 (0.010127) | 0.052067 / 0.023109 (0.028957) | 0.277526 / 0.275898 (0.001628) | 0.300536 / 0.323480 (-0.022944) | 0.003982 / 0.007986 (-0.004004) | 0.002413 / 0.004328 (-0.001915) | 0.046523 / 0.004250 (0.042273) | 0.039383 / 0.037052 (0.002331) | 0.281208 / 0.258489 (0.022719) | 0.306199 / 0.293841 (0.012359) | 0.028646 / 0.128546 (-0.099900) | 0.010664 / 0.075646 (-0.064982) | 0.057393 / 0.419271 (-0.361879) | 0.032171 / 0.043533 (-0.011362) | 0.277576 / 0.255139 (0.022437) | 0.296039 / 0.283200 (0.012840) | 0.017519 / 0.141683 (-0.124164) | 1.153172 / 1.452155 (-0.298982) | 1.180274 / 1.492716 (-0.312442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088287 / 0.018006 (0.070280) | 0.297922 / 0.000490 (0.297433) | 0.000216 / 0.000200 (0.000016) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021936 / 0.037411 (-0.015475) | 0.070181 / 0.014526 (0.055655) | 0.082068 / 0.176557 (-0.094488) | 0.119327 / 0.737135 (-0.617808) | 0.083642 / 0.296338 (-0.212697) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299449 / 0.215209 (0.084240) | 2.914362 / 2.077655 (0.836707) | 1.611906 / 1.504120 (0.107786) | 1.488805 / 1.541195 
(-0.052390) | 1.536010 / 1.468490 (0.067520) | 0.566772 / 4.584777 (-4.018004) | 2.397897 / 3.745712 (-1.347815) | 2.786048 / 5.269862 (-2.483814) | 1.745153 / 4.565676 (-2.820523) | 0.063870 / 0.424275 (-0.360405) | 0.004968 / 0.007607 (-0.002640) | 0.344455 / 0.226044 (0.118410) | 3.465772 / 2.268929 (1.196844) | 1.965761 / 55.444624 (-53.478863) | 1.687960 / 6.876477 (-5.188516) | 1.713987 / 2.142072 (-0.428085) | 0.643760 / 4.805227 (-4.161467) | 0.117623 / 6.500664 (-6.383042) | 0.041086 / 0.075469 (-0.034383) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985129 / 1.841788 (-0.856659) | 11.986676 / 8.074308 (3.912368) | 10.493440 / 10.191392 (0.302048) | 0.130070 / 0.680424 (-0.550353) | 0.015293 / 0.534201 (-0.518908) | 0.285683 / 0.579283 (-0.293600) | 0.275656 / 0.434364 (-0.158708) | 0.328704 / 0.540337 (-0.211633) | 0.537249 / 1.386936 (-0.849687) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d7ee58f322082d3af5f11863d1f809444910827a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005170 / 0.011353 (-0.006183) | 0.003267 / 0.011008 (-0.007741) | 0.061992 / 0.038508 (0.023484) | 0.053414 / 0.023109 (0.030305) | 0.245678 / 0.275898 (-0.030220) | 0.261320 / 0.323480 (-0.062160) | 0.003887 / 0.007986 (-0.004099) | 0.002543 / 0.004328 (-0.001786) | 0.048496 / 0.004250 (0.044246) | 0.037392 / 0.037052 (0.000340) | 0.243728 / 0.258489 (-0.014761) | 0.272524 / 0.293841 (-0.021317) | 0.027578 / 0.128546 (-0.100968) | 0.010530 / 0.075646 (-0.065116) | 0.206014 / 0.419271 (-0.213257) | 0.035987 / 0.043533 (-0.007546) | 0.243544 / 0.255139 (-0.011595) | 0.263872 / 0.283200 (-0.019327) | 0.017867 / 0.141683 (-0.123816) | 1.105159 / 1.452155 (-0.346996) | 1.186640 / 1.492716 (-0.306076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092888 / 0.018006 (0.074882) | 0.302024 / 
0.000490 (0.301534) | 0.000220 / 0.000200 (0.000020) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019329 / 0.037411 (-0.018083) | 0.062135 / 0.014526 (0.047609) | 0.075125 / 0.176557 (-0.101431) | 0.120743 / 0.737135 (-0.616393) | 0.078687 / 0.296338 (-0.217652) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279449 / 0.215209 (0.064240) | 2.727310 / 2.077655 (0.649656) | 1.442710 / 1.504120 (-0.061410) | 1.315271 / 1.541195 (-0.225923) | 1.360435 / 1.468490 (-0.108055) | 0.567720 / 4.584777 (-4.017057) | 2.397049 / 3.745712 (-1.348663) | 2.891180 / 5.269862 (-2.378682) | 1.774179 / 4.565676 (-2.791497) | 0.063155 / 0.424275 (-0.361120) | 0.004963 / 0.007607 (-0.002644) | 0.337526 / 0.226044 (0.111482) | 3.266016 / 2.268929 (0.997088) | 1.808819 / 55.444624 (-53.635806) | 1.525326 / 6.876477 (-5.351151) | 1.566937 / 2.142072 (-0.575135) | 0.654226 / 4.805227 (-4.151001) | 0.118968 / 6.500664 (-6.381696) | 0.042666 / 0.075469 (-0.032803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.940792 / 1.841788 (-0.900996) | 11.736380 / 8.074308 (3.662072) | 10.709538 / 10.191392 (0.518146) | 0.141390 / 0.680424 (-0.539034) | 0.014204 / 0.534201 (-0.519996) | 0.284842 / 0.579283 (-0.294441) | 0.266315 / 0.434364 (-0.168049) | 0.331619 / 0.540337 (-0.208718) | 0.416446 / 1.386936 (-0.970491) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005298 / 0.011353 (-0.006055) | 0.003507 / 0.011008 (-0.007501) | 0.048315 / 0.038508 (0.009807) | 0.054855 / 0.023109 (0.031746) | 0.271558 / 0.275898 (-0.004340) | 0.316851 / 0.323480 (-0.006628) | 0.004054 / 0.007986 (-0.003932) | 0.002433 / 0.004328 (-0.001896) | 0.046442 / 0.004250 (0.042191) | 0.040853 / 0.037052 (0.003801) | 0.272537 / 0.258489 (0.014048) | 0.293736 / 0.293841 (-0.000105) | 0.029112 / 0.128546 (-0.099434) | 0.010573 / 0.075646 (-0.065074) | 0.056501 / 0.419271 (-0.362771) | 0.032541 / 0.043533 (-0.010992) | 0.271004 / 0.255139 (0.015865) | 0.289276 / 0.283200 (0.006076) | 0.018618 / 0.141683 (-0.123065) | 1.149435 / 1.452155 (-0.302719) | 1.205113 / 1.492716 (-0.287604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094726 / 0.018006 (0.076720) | 0.304347 / 0.000490 (0.303857) | 0.000217 / 0.000200 (0.000017) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021374 / 0.037411 (-0.016037) | 0.070574 / 0.014526 (0.056049) | 0.081749 / 0.176557 (-0.094807) | 0.119829 / 0.737135 (-0.617306) | 0.082602 / 0.296338 (-0.213737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293378 / 0.215209 (0.078169) | 2.893607 / 2.077655 (0.815952) | 1.577734 / 1.504120 (0.073614) | 1.453670 / 1.541195 (-0.087525) | 1.467354 / 1.468490 (-0.001136) | 0.563415 / 4.584777 (-4.021362) | 2.438330 / 3.745712 (-1.307382) | 2.761822 / 5.269862 (-2.508040) | 1.730944 / 4.565676 (-2.834732) | 0.062251 / 0.424275 (-0.362024) | 0.004969 / 0.007607 (-0.002638) | 0.371238 / 0.226044 (0.145194) | 3.399831 / 2.268929 (1.130903) | 1.936156 / 55.444624 (-53.508469) | 1.649716 / 6.876477 (-5.226761) | 1.669107 / 2.142072 (-0.472965) | 0.633696 / 4.805227 (-4.171531) | 0.115857 / 6.500664 (-6.384807) | 0.041012 / 0.075469 (-0.034457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964777 / 1.841788 (-0.877010) | 12.037613 / 8.074308 (3.963305) | 10.579241 / 10.191392 (0.387849) | 0.130932 / 0.680424 (-0.549492) | 0.015621 / 0.534201 (-0.518580) | 0.286898 / 0.579283 (-0.292385) | 0.281139 / 0.434364 (-0.153225) | 0.325240 / 0.540337 (-0.215097) | 0.554302 / 1.386936 
(-0.832635) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#48d2378944a47987f96562ee856167aef1e78522 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005258 / 0.011353 (-0.006095) | 0.003863 / 0.011008 (-0.007145) | 0.064585 / 0.038508 (0.026077) | 0.058013 / 0.023109 (0.034904) | 0.249042 / 0.275898 (-0.026856) | 0.273434 / 0.323480 (-0.050046) | 0.004779 / 0.007986 (-0.003207) | 0.002550 / 0.004328 (-0.001778) | 0.048290 / 0.004250 (0.044040) | 0.038777 / 0.037052 (0.001725) | 0.253039 / 0.258489 (-0.005450) | 0.285365 / 0.293841 (-0.008476) | 0.028053 / 0.128546 (-0.100494) | 0.010521 / 0.075646 (-0.065125) | 0.210954 / 0.419271 (-0.208317) | 0.035720 / 0.043533 (-0.007813) | 0.252540 / 0.255139 (-0.002599) | 0.264786 / 0.283200 (-0.018414) | 0.018692 / 0.141683 (-0.122990) | 1.108971 / 1.452155 (-0.343183) | 1.201004 / 1.492716 (-0.291712) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095936 / 0.018006 (0.077930) | 0.302979 / 0.000490 (0.302489) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018859 / 0.037411 (-0.018552) | 0.062559 / 0.014526 (0.048034) | 0.073545 / 0.176557 (-0.103012) | 0.120780 / 0.737135 (-0.616355) | 0.074998 / 0.296338 (-0.221340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 
0.276728 / 0.215209 (0.061519) | 2.715310 / 2.077655 (0.637655) | 1.444927 / 1.504120 (-0.059193) | 1.323867 / 1.541195 (-0.217328) | 1.364962 / 1.468490 (-0.103528) | 0.556792 / 4.584777 (-4.027985) | 2.409151 / 3.745712 (-1.336561) | 2.811836 / 5.269862 (-2.458026) | 1.777369 / 4.565676 (-2.788308) | 0.061398 / 0.424275 (-0.362877) | 0.004924 / 0.007607 (-0.002683) | 0.341228 / 0.226044 (0.115183) | 3.369570 / 2.268929 (1.100641) | 1.858151 / 55.444624 (-53.586474) | 1.587352 / 6.876477 (-5.289125) | 1.625004 / 2.142072 (-0.517068) | 0.635317 / 4.805227 (-4.169910) | 0.117197 / 6.500664 (-6.383467) | 0.042672 / 0.075469 (-0.032797) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.940419 / 1.841788 (-0.901368) | 12.156882 / 8.074308 (4.082574) | 10.646780 / 10.191392 (0.455388) | 0.129279 / 0.680424 (-0.551144) | 0.013967 / 0.534201 (-0.520234) | 0.287956 / 0.579283 (-0.291327) | 0.265250 / 0.434364 (-0.169114) | 0.323357 / 0.540337 (-0.216980) | 0.412045 / 1.386936 (-0.974891) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005264 / 0.011353 (-0.006089) | 0.003575 / 0.011008 (-0.007433) | 0.049249 / 0.038508 (0.010741) | 0.057069 / 0.023109 (0.033959) | 0.327547 / 0.275898 (0.051649) | 0.299027 / 0.323480 (-0.024453) | 0.004768 / 0.007986 (-0.003217) | 0.002522 / 0.004328 (-0.001807) | 0.048020 / 0.004250 (0.043770) | 0.041328 / 0.037052 (0.004275) | 0.281385 / 0.258489 (0.022895) | 0.304957 / 0.293841 (0.011116) | 0.031371 / 0.128546 (-0.097175) | 0.010523 / 0.075646 (-0.065124) | 0.057073 / 0.419271 (-0.362198) | 0.032913 / 0.043533 (-0.010620) | 0.284963 / 0.255139 (0.029824) | 0.291997 / 0.283200 (0.008798) | 0.018325 / 0.141683 (-0.123357) | 1.126681 / 1.452155 (-0.325473) | 1.183011 / 1.492716 (-0.309705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092544 / 0.018006 (0.074538) | 0.299841 / 0.000490 (0.299351) | 0.000221 / 0.000200 
(0.000021) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022279 / 0.037411 (-0.015133) | 0.072515 / 0.014526 (0.057989) | 0.083068 / 0.176557 (-0.093488) | 0.120600 / 0.737135 (-0.616536) | 0.083574 / 0.296338 (-0.212765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293393 / 0.215209 (0.078184) | 2.865420 / 2.077655 (0.787765) | 1.562419 / 1.504120 (0.058299) | 1.440846 / 1.541195 (-0.100349) | 1.471993 / 1.468490 (0.003503) | 0.572510 / 4.584777 (-4.012267) | 2.427417 / 3.745712 (-1.318295) | 2.895347 / 5.269862 (-2.374515) | 1.790578 / 4.565676 (-2.775098) | 0.064489 / 0.424275 (-0.359786) | 0.005044 / 0.007607 (-0.002564) | 0.340774 / 0.226044 (0.114730) | 3.391414 / 2.268929 (1.122486) | 1.939980 / 55.444624 (-53.504644) | 1.658514 / 6.876477 (-5.217963) | 1.741406 / 2.142072 (-0.400667) | 0.649033 / 4.805227 (-4.156194) | 0.117587 / 6.500664 (-6.383077) | 0.042042 / 0.075469 (-0.033427) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980490 / 1.841788 (-0.861298) | 12.664045 / 8.074308 (4.589737) | 10.944437 / 10.191392 (0.753045) | 0.142059 / 0.680424 (-0.538365) | 0.015914 / 0.534201 (-0.518287) | 0.288826 / 0.579283 (-0.290457) | 0.282351 / 0.434364 (-0.152013) | 0.325302 / 0.540337 (-0.215035) | 0.416900 / 1.386936 (-0.970036) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#59750317ad258a4380ab6a6d206932b8d482ece1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005591 / 0.011353 (-0.005762) | 0.003445 / 0.011008 (-0.007563) | 0.064290 / 0.038508 (0.025782) | 0.053046 / 0.023109 (0.029936) | 0.229101 / 0.275898 (-0.046797) | 0.255515 / 0.323480 (-0.067964) | 0.002912 / 0.007986 (-0.005073) | 0.002466 / 0.004328 (-0.001863) | 0.049348 / 0.004250 (0.045098) | 0.039492 / 0.037052 (0.002440) | 0.236301 / 0.258489 (-0.022188) | 0.270109 / 0.293841 (-0.023732) | 0.027506 / 0.128546 (-0.101040) | 0.010381 / 0.075646 (-0.065265) | 0.209999 / 0.419271 (-0.209273) | 0.035827 / 0.043533 (-0.007705) | 0.237231 / 0.255139 (-0.017908) | 0.254345 / 0.283200 (-0.028854) | 0.019689 / 0.141683 (-0.121994) | 1.096103 / 1.452155 (-0.356052) | 1.172393 / 1.492716 (-0.320323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101749 / 0.018006 (0.083743) | 0.310913 / 0.000490 (0.310424) | 0.000217 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018743 / 0.037411 (-0.018669) | 0.064190 / 0.014526 (0.049664) | 0.074575 / 0.176557 (-0.101982) | 0.124143 / 0.737135 (-0.612993) | 0.077415 / 0.296338 (-0.218924) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286175 / 0.215209 (0.070965) | 2.781169 / 2.077655 (0.703515) | 1.495130 / 1.504120 (-0.008990) | 1.379136 / 1.541195 (-0.162059) | 1.397548 / 1.468490 (-0.070942) | 0.564467 / 4.584777 (-4.020310) | 2.408896 / 3.745712 (-1.336816) | 2.857771 / 5.269862 (-2.412091) | 1.776531 / 4.565676 (-2.789145) | 0.062700 / 0.424275 (-0.361575) | 0.004965 / 0.007607 (-0.002642) | 0.344026 / 0.226044 (0.117982) | 3.390829 / 2.268929 (1.121900) | 1.875258 / 55.444624 (-53.569366) | 1.602435 / 6.876477 (-5.274042) | 1.613619 / 2.142072 (-0.528454) | 0.639421 / 4.805227 (-4.165806) | 0.117697 / 6.500664 (-6.382967) | 0.042878 / 0.075469 (-0.032591) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957694 / 1.841788 (-0.884094) | 11.888917 / 8.074308 (3.814609) | 10.643389 / 10.191392 (0.451997) | 0.143358 / 0.680424 (-0.537066) | 0.014382 / 0.534201 (-0.519819) | 0.288731 / 0.579283 
(-0.290552) | 0.270040 / 0.434364 (-0.164324) | 0.323586 / 0.540337 (-0.216751) | 0.415743 / 1.386936 (-0.971193) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005228 / 0.011353 (-0.006125) | 0.003445 / 0.011008 (-0.007563) | 0.051072 / 0.038508 (0.012563) | 0.053087 / 0.023109 (0.029978) | 0.273116 / 0.275898 (-0.002782) | 0.298633 / 0.323480 (-0.024847) | 0.004067 / 0.007986 (-0.003919) | 0.002537 / 0.004328 (-0.001791) | 0.049326 / 0.004250 (0.045075) | 0.041011 / 0.037052 (0.003959) | 0.277748 / 0.258489 (0.019258) | 0.304152 / 0.293841 (0.010311) | 0.029012 / 0.128546 (-0.099534) | 0.010589 / 0.075646 (-0.065057) | 0.057564 / 0.419271 (-0.361707) | 0.032785 / 0.043533 (-0.010747) | 0.272508 / 0.255139 (0.017369) | 0.294127 / 0.283200 (0.010927) | 0.018466 / 0.141683 (-0.123217) | 1.129341 / 1.452155 (-0.322814) | 1.194631 / 1.492716 (-0.298086) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098558 / 0.018006 (0.080552) | 0.312353 / 0.000490 (0.311863) | 0.000269 / 0.000200 (0.000069) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022148 / 0.037411 (-0.015263) | 0.070601 / 0.014526 (0.056075) | 0.081780 / 0.176557 (-0.094777) | 0.121993 / 0.737135 (-0.615142) | 0.084263 / 0.296338 (-0.212076) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300501 / 0.215209 (0.085292) | 2.927534 / 2.077655 (0.849879) | 
1.595527 / 1.504120 (0.091407) | 1.475607 / 1.541195 (-0.065587) | 1.496707 / 1.468490 (0.028217) | 0.559051 / 4.584777 (-4.025726) | 2.427126 / 3.745712 (-1.318586) | 2.820908 / 5.269862 (-2.448953) | 1.757492 / 4.565676 (-2.808185) | 0.062391 / 0.424275 (-0.361884) | 0.004950 / 0.007607 (-0.002657) | 0.351204 / 0.226044 (0.125160) | 3.485068 / 2.268929 (1.216139) | 1.976418 / 55.444624 (-53.468207) | 1.682715 / 6.876477 (-5.193761) | 1.703457 / 2.142072 (-0.438616) | 0.643476 / 4.805227 (-4.161751) | 0.116321 / 6.500664 (-6.384343) | 0.040776 / 0.075469 (-0.034694) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974152 / 1.841788 (-0.867635) | 12.390170 / 8.074308 (4.315862) | 10.866283 / 10.191392 (0.674891) | 0.145049 / 0.680424 (-0.535375) | 0.016404 / 0.534201 (-0.517797) | 0.288799 / 0.579283 (-0.290484) | 0.285917 / 0.434364 (-0.148447) | 0.328455 / 0.540337 (-0.211883) | 0.417286 / 1.386936 (-0.969650) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#59750317ad258a4380ab6a6d206932b8d482ece1 \"CML watermark\")\n" ]
2023-11-23T17:35:02Z
2023-11-27T10:02:56Z
2023-11-24T17:13:02Z
COLLABORATOR
null
null
null
Refetch metadata files in case they were dropped by `filter_extensions` in the previous step. Fix #6442
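A hedged sketch of the idea this PR body describes: after `filter_extensions` narrows the resolved data files, look the metadata files up again so the filter does not drop them. The function and variable names below are illustrative toys, not the library's actual internals.

```python
# Illustrative only: a minimal version of "refetch metadata files after
# filter_extensions"; these names are invented, not datasets internals.
def resolve_data_files(all_files: list, allowed_extensions: list) -> list:
    # Step 1: keep only files whose extension is allowed (the filter step).
    data_files = [f for f in all_files if f.endswith(tuple(allowed_extensions))]
    # Step 2: refetch metadata files the extension filter may have dropped.
    metadata_files = [f for f in all_files if f.rsplit("/", 1)[-1] == "metadata.jsonl"]
    return data_files + [f for f in metadata_files if f not in data_files]

print(resolve_data_files(["imgs/a.png", "imgs/metadata.jsonl"], [".png"]))
# ['imgs/a.png', 'imgs/metadata.jsonl']
```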
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6449/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6449.diff", "html_url": "https://github.com/huggingface/datasets/pull/6449", "merged_at": "2023-11-24T17:13:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6449.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6449" }
https://api.github.com/repos/huggingface/datasets/issues/7531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7531/comments
https://api.github.com/repos/huggingface/datasets/issues/7531/events
https://github.com/huggingface/datasets/issues/7531
3,008,914,887
I_kwDODunzps6zWGXH
7,531
Deepspeed reward training hangs at end of training with Dataset.from_list
{ "avatar_url": "https://avatars.githubusercontent.com/u/60710414?v=4", "events_url": "https://api.github.com/users/Matt00n/events{/privacy}", "followers_url": "https://api.github.com/users/Matt00n/followers", "following_url": "https://api.github.com/users/Matt00n/following{/other_user}", "gists_url": "https://api.github.com/users/Matt00n/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Matt00n", "id": 60710414, "login": "Matt00n", "node_id": "MDQ6VXNlcjYwNzEwNDE0", "organizations_url": "https://api.github.com/users/Matt00n/orgs", "received_events_url": "https://api.github.com/users/Matt00n/received_events", "repos_url": "https://api.github.com/users/Matt00n/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Matt00n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Matt00n/subscriptions", "type": "User", "url": "https://api.github.com/users/Matt00n", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2025-04-21T17:29:20Z
2025-04-21T17:29:20Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
There seems to be a weird interaction between Deepspeed, the Dataset.from_list method and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training and running the same script with Deepspeed on a single GPU works without hanging. The issue persisted across a wide range of Deepspeed configs and training arguments. The issue went away when storing the exact same dataset as a JSON and using `dataset = load_dataset("json", ...)`. Here is my training script: ```python import pickle import os import random import warnings import torch from datasets import load_dataset, Dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer from trl import RewardConfig, RewardTrainer, ModelConfig, setup_chat_format ####################################### Reward model ################################################# # Explicitly set arguments model_name_or_path = "Qwen/Qwen2.5-1.5B" output_dir = "Qwen2-0.5B-Reward-LoRA" per_device_train_batch_size = 2 num_train_epochs = 5 gradient_checkpointing = True learning_rate = 1.0e-4 logging_steps = 25 eval_strategy = "steps" eval_steps = 50 max_length = 2048 torch_dtype = "auto" trust_remote_code = False model_args = ModelConfig( model_name_or_path=model_name_or_path, model_revision=None, trust_remote_code=trust_remote_code, torch_dtype=torch_dtype, lora_task_type="SEQ_CLS", # Make sure task type is seq_cls ) training_args = RewardConfig( output_dir=output_dir, per_device_train_batch_size=per_device_train_batch_size, num_train_epochs=num_train_epochs, gradient_checkpointing=gradient_checkpointing, learning_rate=learning_rate, logging_steps=logging_steps, eval_strategy=eval_strategy, eval_steps=eval_steps, max_length=max_length, gradient_checkpointing_kwargs=dict(use_reentrant=False), center_rewards_coefficient = 0.01, fp16=False, bf16=True, save_strategy="no", dataloader_num_workers=0, # deepspeed="./configs/deepspeed_config.json", ) ################ # Model & Tokenizer ################ model_kwargs = dict( revision=model_args.model_revision, use_cache=False if training_args.gradient_checkpointing else True, torch_dtype=model_args.torch_dtype, ) tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, use_fast=True ) model = AutoModelForSequenceClassification.from_pretrained( model_args.model_name_or_path, num_labels=1, trust_remote_code=model_args.trust_remote_code, **model_kwargs ) # Align padding tokens between tokenizer and model model.config.pad_token_id = tokenizer.pad_token_id # If post-training a base model, use ChatML as the default template if tokenizer.chat_template is None: model, tokenizer = setup_chat_format(model, tokenizer) if model_args.use_peft and model_args.lora_task_type != "SEQ_CLS": warnings.warn( "You are using a `task_type` that is different than `SEQ_CLS` for PEFT. This will lead to silent bugs" " Make sure to pass --lora_task_type SEQ_CLS when using this script with PEFT.", UserWarning, ) ############## # Load dataset ############## with open('./prefs.pkl', 'rb') as fh: loaded_data = pickle.load(fh) random.shuffle(loaded_data) dataset = [] for a_wins, a, b in loaded_data: if a_wins == 0: a, b = b, a dataset.append({'chosen': a, 'rejected': b}) dataset = Dataset.from_list(dataset) # Split the dataset into training and evaluation sets train_eval_split = dataset.train_test_split(test_size=0.15, shuffle=True, seed=42) # Access the training and evaluation datasets train_dataset = train_eval_split['train'] eval_dataset = train_eval_split['test'] ########## # Training ########## trainer = RewardTrainer( model=model, processing_class=tokenizer, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, ) trainer.train() ``` Replacing `dataset = Dataset.from_list(dataset)` with ```python import json with open('./prefs.json', 'w') as fh: json.dump(dataset, fh) dataset = load_dataset("json", data_files="./prefs.json", split='train') ``` resolves the issue.
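For reference, a self-contained version of the workaround described in the body above, assuming the preference pairs are already in memory; the file path and placeholder records below are illustrative.

```python
# Round-trip the records through a JSON file and load them with the json
# builder instead of Dataset.from_list, as the report's workaround suggests.
import json

from datasets import load_dataset

records = [{"chosen": "response A", "rejected": "response B"}]  # placeholder data

with open("./prefs.json", "w") as fh:
    json.dump(records, fh)

dataset = load_dataset("json", data_files="./prefs.json", split="train")
print(dataset[0])
```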
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7531/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5093/comments
https://api.github.com/repos/huggingface/datasets/issues/5093/events
https://github.com/huggingface/datasets/issues/5093
1,402,939,660
I_kwDODunzps5TnykM
5,093
Mismatch between tutorial and doc
{ "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/clefourrier", "id": 22726840, "login": "clefourrier", "node_id": "MDQ6VXNlcjIyNzI2ODQw", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "repos_url": "https://api.github.com/users/clefourrier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "type": "User", "url": "https://api.github.com/users/clefourrier", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "DF8D62", "default": false, "description": "", "id": 4614514401, "name": "hacktoberfest", "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/riccardobucco", "id": 9295277, "login": "riccardobucco", "node_id": "MDQ6VXNlcjkyOTUyNzc=", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "repos_url": "https://api.github.com/users/riccardobucco/repos", "site_admin": false, "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "type": "User", "url": "https://api.github.com/users/riccardobucco", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/riccardobucco", "id": 9295277, "login": "riccardobucco", "node_id": "MDQ6VXNlcjkyOTUyNzc=", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "repos_url": "https://api.github.com/users/riccardobucco/repos", "site_admin": false, "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "type": "User", "url": "https://api.github.com/users/riccardobucco", "user_view_type": "public" } ]
null
[ "Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).", "Can I work on this?", "Fixed in https://github.com/huggingface/datasets/pull/5095" ]
2022-10-10T10:23:53Z
2022-10-10T17:51:15Z
2022-10-10T17:51:14Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Describe the bug In the "Process text data" tutorial, [`map` is shown with `return_tensors` as a kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor does it work. ## Steps to reproduce the bug MWE: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") from datasets import load_dataset dataset = load_dataset("lhoestq/demo1", split="train") dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt") ``` ## Expected results `return_tensors` to be a valid kwarg :smiley: ## Actual results ```python >> TypeError: map() got an unexpected keyword argument 'return_tensors' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
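Per the fix suggested in the comments above, a working version of the snippet moves `return_tensors` into the tokenizer call rather than passing it to `Dataset.map`:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = load_dataset("lhoestq/demo1", split="train")
# return_tensors is a tokenizer argument, not a Dataset.map argument.
dataset = dataset.map(
    lambda examples: tokenizer(examples["review"], return_tensors="np"),
    batched=True,
)
```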
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5093/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5250/comments
https://api.github.com/repos/huggingface/datasets/issues/5250/events
https://github.com/huggingface/datasets/pull/5250
1,451,720,030
PR_kwDODunzps5DB-1y
5,250
Change release procedure to use only pull requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.", "Little recap:\r\n- The release-conda GH action was properly triggered by push-tag event: therefore I guess this event is also created when we publish a release and create a tag within it (as it is the case in the new procedure)\r\n - However, the package was only uploaded to huggingface channel and not to conda-forge channel\r\n - [x] Why? Need to address this.\r\n - Reply by @lhoestq: https://github.com/huggingface/datasets/pull/5250#discussion_r1025047531\r\n - We only maintain the huggingface channel\r\n - The conda-forge channel is maintained by the community; the 2.7.0 has been finally added as well to this channel \r\n- The generate-documentation GH action will be triggered by the push-to-branch event if we align the name of the release branch with the expected regex `v*-release`\r\n - [x] The naming has been aligned in the new procedure\r\n - [ ] Question: why do we have different triggering events for generate-doc and release-conda? Maybe we could set the same for both: either push-tag (when publishing the release), or push-to-branch\r\n - I think it will be better to use the push-tag event because in the new release procedure this happens later (when we publish the release), once we have already tested that everything works using the test-PyPI; on the contrary, the push-to-branch event happens before, even before opening the release PR: we could see afterwards that there is an issue, and cancel the Pull Request, but the docs and conda-package will already be published.\r\n- For the naming of the dev-version branch/PR, instead of having a complicated version naming, I'm proposing:\r\n - Using always the same branch name `dev-version`\r\n - Just include a step to delete this branch locally if it exists: `git branch -D dev-version`\r\n - The remote version will not exist because it is deleted once the PR is merged\r\n - This approach is approved by @lhoestq: https://github.com/huggingface/datasets/pull/5250#discussion_r1025048300", "Just one question to be addressed: why do we have different triggering events for generate-doc and release-conda? 
Maybe we could set the same for both: either push-tag (when publishing the release), or push-to-branch\r\n\r\nI think it will be better to use the push-tag event because in the new release procedure this happens later (when we publish the release), once we have already tested that everything works using the test-PyPI; on the contrary, the push-to-branch event happens before, even before opening the release PR: we could see afterwards that there is an issue, and cancel the Pull Request, but the docs and conda-package will already be published.\r\n\r\nWe could even use the release-published event instead: [8694901](https://github.com/huggingface/datasets/pull/5250/commits/86949013c9dc59a07b55fad5b78104b8a03f60cd)\r\n", "@lhoestq now that we have push-tag event for both build_documentation and release-conda, we have no constraint on the naming of the release branch:\r\n- we could name it simpler: maybe as you suggested above: https://github.com/huggingface/datasets/pull/5250#discussion_r1024119018\r\n `release-VERSION` instead of `vVERSION-release` (we do not use the prefix \"v\" anywhere in our repo)" ]
2022-11-16T14:35:32Z
2022-11-22T16:30:58Z
2022-11-22T16:27:48Z
MEMBER
null
null
null
This PR changes the release procedure so that: - it only makes changes to the main branch via pull requests - it is no longer necessary to commit/push directly to the main branch Closes #5251.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5250/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5250/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5250.diff", "html_url": "https://github.com/huggingface/datasets/pull/5250", "merged_at": "2022-11-22T16:27:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/5250.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5250" }
https://api.github.com/repos/huggingface/datasets/issues/6512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6512/comments
https://api.github.com/repos/huggingface/datasets/issues/6512/events
https://github.com/huggingface/datasets/pull/6512
2,048,795,819
PR_kwDODunzps5iYI5z
6,512
Remove deprecated HfFolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6512). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005468 / 0.011353 (-0.005885) | 0.003447 / 0.011008 (-0.007561) | 0.062569 / 0.038508 (0.024061) | 0.049427 / 0.023109 (0.026318) | 0.238463 / 0.275898 (-0.037435) | 0.268320 / 0.323480 (-0.055159) | 0.002834 / 0.007986 (-0.005151) | 0.002679 / 0.004328 (-0.001649) | 0.048613 / 0.004250 (0.044363) | 0.038793 / 0.037052 (0.001741) | 0.247710 / 0.258489 (-0.010779) | 0.277557 / 0.293841 (-0.016284) | 0.027134 / 0.128546 (-0.101412) | 0.010346 / 0.075646 (-0.065301) | 0.205782 / 0.419271 (-0.213490) | 0.035549 / 0.043533 (-0.007983) | 0.241667 / 0.255139 (-0.013472) | 0.268358 / 0.283200 (-0.014842) | 0.017119 / 0.141683 (-0.124563) | 1.108487 / 1.452155 (-0.343668) | 1.177519 / 1.492716 (-0.315197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090925 / 0.018006 (0.072919) | 0.310422 / 0.000490 (0.309932) | 0.000212 / 0.000200 (0.000012) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018912 / 0.037411 (-0.018499) | 0.061534 / 0.014526 (0.047008) | 0.073608 / 0.176557 (-0.102949) | 0.119278 / 0.737135 (-0.617858) | 0.074698 / 0.296338 (-0.221640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287224 / 0.215209 (0.072014) | 2.792022 / 2.077655 (0.714367) | 1.474605 / 1.504120 (-0.029515) | 1.348714 / 1.541195 (-0.192481) | 1.381339 / 1.468490 (-0.087151) | 0.553033 / 4.584777 (-4.031744) | 2.360745 / 3.745712 (-1.384967) | 2.779281 / 5.269862 (-2.490580) | 1.743922 / 4.565676 (-2.821754) | 0.063817 / 0.424275 (-0.360458) | 0.004954 / 0.007607 (-0.002653) | 0.340039 / 0.226044 (0.113994) | 3.336771 / 2.268929 (1.067843) | 1.825573 / 55.444624 (-53.619051) | 1.525362 / 6.876477 (-5.351115) | 1.578793 / 2.142072 (-0.563280) | 0.638432 / 4.805227 (-4.166795) | 0.117601 / 6.500664 (-6.383063) | 0.041890 / 0.075469 (-0.033579) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936896 / 1.841788 (-0.904892) | 11.426979 / 8.074308 (3.352671) | 10.636043 / 10.191392 (0.444651) | 0.136172 / 0.680424 (-0.544252) | 0.014249 / 0.534201 (-0.519952) | 0.287806 / 0.579283 (-0.291477) | 0.266046 / 0.434364 (-0.168318) | 0.326155 / 0.540337 (-0.214183) | 0.455508 / 1.386936 (-0.931428) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005199 / 0.011353 (-0.006154) | 0.003476 / 0.011008 (-0.007532) | 0.050519 / 0.038508 (0.012011) | 0.050732 / 0.023109 (0.027623) | 0.270140 / 0.275898 (-0.005758) | 0.295539 / 0.323480 (-0.027941) | 0.004057 / 0.007986 (-0.003928) | 0.002771 / 0.004328 (-0.001558) | 0.049157 / 0.004250 (0.044906) | 0.039863 / 0.037052 (0.002811) | 0.275934 / 0.258489 (0.017445) | 0.306971 / 0.293841 (0.013130) | 0.029405 / 0.128546 (-0.099141) | 0.010746 / 0.075646 (-0.064900) | 0.058427 / 0.419271 (-0.360845) | 0.032448 / 0.043533 (-0.011085) | 0.271851 / 0.255139 (0.016712) | 0.290337 / 0.283200 (0.007138) | 0.019145 / 0.141683 (-0.122538) | 1.112232 / 1.452155 (-0.339922) | 1.215153 / 1.492716 (-0.277564) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.088590 / 0.018006 (0.070584) | 0.299047 / 0.000490 (0.298558) | 0.000219 / 0.000200 (0.000019) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022755 / 0.037411 (-0.014656) | 0.078720 / 0.014526 (0.064194) | 0.089051 / 0.176557 (-0.087505) | 0.129330 / 0.737135 (-0.607805) | 0.090645 / 0.296338 (-0.205693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294083 / 0.215209 (0.078874) | 2.907195 / 2.077655 (0.829540) | 1.607392 / 1.504120 (0.103272) | 1.481931 / 1.541195 (-0.059263) | 1.486934 / 1.468490 (0.018444) | 0.574093 / 4.584777 (-4.010684) | 2.439775 / 3.745712 (-1.305937) | 2.739818 / 5.269862 (-2.530044) | 1.753922 / 4.565676 (-2.811755) | 0.063738 / 0.424275 (-0.360537) | 0.005219 / 0.007607 (-0.002388) | 0.350342 / 0.226044 (0.124297) | 3.463644 / 2.268929 (1.194716) | 1.971598 / 55.444624 (-53.473026) | 1.671752 / 6.876477 (-5.204724) | 1.686504 / 2.142072 (-0.455569) | 0.655870 / 4.805227 (-4.149357) | 0.117580 / 6.500664 (-6.383084) | 0.041210 / 0.075469 (-0.034259) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996305 / 1.841788 (-0.845482) | 12.426361 / 8.074308 (4.352053) | 10.600309 / 10.191392 (0.408917) | 0.129728 / 0.680424 (-0.550695) | 0.015267 / 0.534201 (-0.518934) | 0.285444 / 0.579283 (-0.293839) | 0.272375 / 0.434364 (-0.161989) | 0.323478 / 0.540337 (-0.216860) | 0.547566 / 1.386936 (-0.839370) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a91582de288d98e94bcb5ab634ca1cfeeff544c5 \"CML watermark\")\n" ]
2023-12-19T14:40:49Z
2023-12-19T20:21:13Z
2023-12-19T20:14:30Z
MEMBER
null
null
null
...and use `huggingface_hub.get_token()` instead
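A minimal sketch of the replacement this PR describes, assuming a `huggingface_hub` release where `get_token()` is public API:

```python
# get_token() reads the HF_TOKEN environment variable or the token stored
# by `huggingface-cli login`, replacing the deprecated HfFolder.get_token().
from huggingface_hub import get_token

token = get_token()  # returns None when no token is configured
headers = {"Authorization": f"Bearer {token}"} if token else {}
```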
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6512/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6512/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6512.diff", "html_url": "https://github.com/huggingface/datasets/pull/6512", "merged_at": "2023-12-19T20:14:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/6512.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6512" }
https://api.github.com/repos/huggingface/datasets/issues/7169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7169/comments
https://api.github.com/repos/huggingface/datasets/issues/7169/events
https://github.com/huggingface/datasets/issues/7169
2,546,894,076
I_kwDODunzps6XzoT8
7,169
JSON lines with missing columns raise CastError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-25T04:43:28Z
2024-09-26T06:42:08Z
2024-09-26T06:42:08Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
JSON lines with missing columns raise CastError: > CastError: Couldn't cast ... to ... because column names don't match Related to: - #7159 - #7161
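A hypothetical reproduction in the spirit of the report; whether this exact file triggers the error depends on how the reader chunks the input, so treat it as illustrative only.

```python
# Two JSON lines where the second is missing column "b"; per the report,
# loading such data could raise CastError instead of filling nulls.
from datasets import load_dataset

with open("data.jsonl", "w") as f:
    f.write('{"a": 1, "b": 2}\n')
    f.write('{"a": 3}\n')  # missing column "b"

ds = load_dataset("json", data_files="data.jsonl", split="train")
print(ds[1])  # expected once fixed: {'a': 3, 'b': None}
```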
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7169/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7169/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6863/comments
https://api.github.com/repos/huggingface/datasets/issues/6863/events
https://github.com/huggingface/datasets/issues/6863
2,276,977,534
I_kwDODunzps6Ht-t-
6,863
Revert temporary pin huggingface-hub < 0.23.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-05-03T05:53:55Z
2024-05-27T10:14:41Z
2024-05-27T10:14:41Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Revert the temporary pin `huggingface-hub<0.23.0` introduced by - #6861 once the following issue is fixed and released: - huggingface/transformers#30618
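For context, this is roughly what the pin and its revert look like among install requirements; the version floor below is a placeholder, not the repository's actual setup.py.

```python
# Before the revert (temporary pin tracked by this issue):
REQUIRED_PKGS = [
    "huggingface-hub>=0.21.2,<0.23.0",  # placeholder floor; the <0.23.0 cap is the pin
]
# After the revert:
REQUIRED_PKGS = [
    "huggingface-hub>=0.21.2",
]
```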
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6863/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6863/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5771/comments
https://api.github.com/repos/huggingface/datasets/issues/5771/events
https://github.com/huggingface/datasets/issues/5771
1,674,828,380
I_kwDODunzps5j09pc
5,771
Support cloud storage for loading datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "A duplicate of https://github.com/huggingface/datasets/issues/5281" ]
2023-04-19T12:43:53Z
2023-05-07T17:47:41Z
2023-05-07T17:47:41Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if similar functionality existed in `load_dataset`. ### Motivation Motivation is pretty clear -- let users work with datasets located in the cloud. ### Your contribution I can help implement this.
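A sketch of the asymmetry the request points at, assuming an fsspec-compatible bucket; the S3 URI and credentials are placeholders.

```python
from datasets import load_dataset, load_from_disk

storage_options = {"key": "<aws-key>", "secret": "<aws-secret>"}  # placeholders

# Already supported at the time: reading a dataset saved with save_to_disk
# from cloud storage.
ds = load_from_disk("s3://my-bucket/my-dataset", storage_options=storage_options)

# Requested: the same convenience in load_dataset, e.g. pointing data_files
# at remote objects.
ds = load_dataset(
    "csv",
    data_files="s3://my-bucket/data.csv",
    storage_options=storage_options,
)
```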
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5771/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5771/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5548/comments
https://api.github.com/repos/huggingface/datasets/issues/5548/events
https://github.com/huggingface/datasets/issues/5548
1,590,835,479
I_kwDODunzps5e0jkX
5,548
Apply flake8-comprehensions to codebase
{ "avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4", "events_url": "https://api.github.com/users/Skylion007/events{/privacy}", "followers_url": "https://api.github.com/users/Skylion007/followers", "following_url": "https://api.github.com/users/Skylion007/following{/other_user}", "gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Skylion007", "id": 2053727, "login": "Skylion007", "node_id": "MDQ6VXNlcjIwNTM3Mjc=", "organizations_url": "https://api.github.com/users/Skylion007/orgs", "received_events_url": "https://api.github.com/users/Skylion007/received_events", "repos_url": "https://api.github.com/users/Skylion007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions", "type": "User", "url": "https://api.github.com/users/Skylion007", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
2023-02-19T20:05:38Z
2023-02-23T13:59:41Z
2023-02-23T13:59:41Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request Apply ruff's flake8-comprehensions checks to the codebase. ### Motivation This should strictly improve the performance and readability of the codebase by removing unnecessary iteration and function calls, and it generates better Python bytecode as a result. I have already applied these fixes to PyTorch and SymPy with little issue, and have opened PRs to do the same for diffusers and transformers. ### Your contribution Making a PR.
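For illustration, a few representative rewrites these checks perform; the rule codes come from the flake8-comprehensions documentation and the snippets are toy examples, not code from this repository:

```python
words = ["alpha", "beta", "gamma"]

# C400: unnecessary generator passed to list()
squares = list(x * x for x in range(10))           # before
squares = [x * x for x in range(10)]               # after

# C402: unnecessary generator passed to dict()
index = dict((w, i) for i, w in enumerate(words))  # before
index = {w: i for i, w in enumerate(words)}        # after

# C408: dict() call that can be a plain literal
config = dict(a=1, b=2)                            # before
config = {"a": 1, "b": 2}                          # after
```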
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5548/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7421/comments
https://api.github.com/repos/huggingface/datasets/issues/7421/events
https://github.com/huggingface/datasets/issues/7421
2,878,369,052
I_kwDODunzps6rkG0c
7,421
DVC integration broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/34747372?v=4", "events_url": "https://api.github.com/users/maxstrobel/events{/privacy}", "followers_url": "https://api.github.com/users/maxstrobel/followers", "following_url": "https://api.github.com/users/maxstrobel/following{/other_user}", "gists_url": "https://api.github.com/users/maxstrobel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maxstrobel", "id": 34747372, "login": "maxstrobel", "node_id": "MDQ6VXNlcjM0NzQ3Mzcy", "organizations_url": "https://api.github.com/users/maxstrobel/orgs", "received_events_url": "https://api.github.com/users/maxstrobel/received_events", "repos_url": "https://api.github.com/users/maxstrobel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maxstrobel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxstrobel/subscriptions", "type": "User", "url": "https://api.github.com/users/maxstrobel", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Unfortunately `url` is a reserved argument in `fsspec.url_to_fs`, so ideally file system implementations like DVC should use another argument name to avoid this kind of errors" ]
2025-02-25T13:14:31Z
2025-03-03T17:42:02Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The DVC integration seems to be broken. Followed this guide: https://dvc.org/doc/user-guide/integrations/huggingface ### Steps to reproduce the bug #### Script to reproduce ~~~python from datasets import load_dataset dataset = load_dataset( "csv", data_files="dvc://workshop/satellite-data/jan_train.csv", storage_options={"url": "https://github.com/iterative/dataset-registry.git"}, ) print(dataset) ~~~ #### Error log ~~~ Traceback (most recent call last): File "C:\tmp\test\load.py", line 3, in <module> dataset = load_dataset( ^^^^^^^^^^^^^ File "C:\tmp\test\.venv\Lib\site-packages\datasets\load.py", line 2151, in load_dataset builder_instance.download_and_prepare( File "C:\tmp\test\.venv\Lib\site-packages\datasets\builder.py", line 808, in download_and_prepare fs, output_dir = url_to_fs(output_dir, **(storage_options or {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: url_to_fs() got multiple values for argument 'url' ~~~ ### Expected behavior Integration would work and the indicated file is downloaded and opened. ### Environment info #### Python version ~~~ python --version Python 3.11.10 ~~~ #### Venv (pip install datasets dvc): ~~~ Package Version ---------------------- ----------- aiohappyeyeballs 2.4.6 aiohttp 3.11.13 aiohttp-retry 2.9.1 aiosignal 1.3.2 amqp 5.3.1 annotated-types 0.7.0 antlr4-python3-runtime 4.9.3 appdirs 1.4.4 asyncssh 2.20.0 atpublic 5.1 attrs 25.1.0 billiard 4.2.1 celery 5.4.0 certifi 2025.1.31 cffi 1.17.1 charset-normalizer 3.4.1 click 8.1.8 click-didyoumean 0.3.1 click-plugins 1.1.1 click-repl 0.3.0 colorama 0.4.6 configobj 5.0.9 cryptography 44.0.1 datasets 3.3.2 dictdiffer 0.9.0 dill 0.3.8 diskcache 5.6.3 distro 1.9.0 dpath 2.2.0 dulwich 0.22.7 dvc 3.59.1 dvc-data 3.16.9 dvc-http 2.32.0 dvc-objects 5.1.0 dvc-render 1.0.2 dvc-studio-client 0.21.0 dvc-task 0.40.2 entrypoints 0.4 filelock 3.17.0 flatten-dict 0.4.2 flufl-lock 8.1.0 frozenlist 1.5.0 fsspec 2024.12.0 funcy 2.0 gitdb 4.0.12 gitpython 3.1.44 grandalf 0.8 gto 1.7.2 huggingface-hub 0.29.1 hydra-core 1.3.2 idna 3.10 iterative-telemetry 0.0.10 kombu 5.4.2 markdown-it-py 3.0.0 mdurl 0.1.2 multidict 6.1.0 multiprocess 0.70.16 networkx 3.4.2 numpy 2.2.3 omegaconf 2.3.0 orjson 3.10.15 packaging 24.2 pandas 2.2.3 pathspec 0.12.1 platformdirs 4.3.6 prompt-toolkit 3.0.50 propcache 0.3.0 psutil 7.0.0 pyarrow 19.0.1 pycparser 2.22 pydantic 2.10.6 pydantic-core 2.27.2 pydot 3.0.4 pygit2 1.17.0 pygments 2.19.1 pygtrie 2.5.0 pyparsing 3.2.1 python-dateutil 2.9.0.post0 pytz 2025.1 pywin32 308 pyyaml 6.0.2 requests 2.32.3 rich 13.9.4 ruamel-yaml 0.18.10 ruamel-yaml-clib 0.2.12 scmrepo 3.3.10 semver 3.0.4 setuptools 75.8.0 shellingham 1.5.4 shortuuid 1.0.13 shtab 1.7.1 six 1.17.0 smmap 5.0.2 sqltrie 0.11.2 tabulate 0.9.0 tomlkit 0.13.2 tqdm 4.67.1 typer 0.15.1 typing-extensions 4.12.2 tzdata 2025.1 urllib3 2.3.0 vine 5.1.0 voluptuous 0.15.2 wcwidth 0.2.13 xxhash 3.5.0 yarl 1.18.3 zc-lockfile 3.0.post1 ~~~
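A minimal sketch of the collision behind the traceback: `fsspec.core.url_to_fs` takes its first positional parameter under the name `url`, and `datasets` forwards `storage_options` as keyword arguments, so the name is passed twice:

```python
import fsspec

# datasets calls url_to_fs(output_dir, **(storage_options or {})), so a
# storage_options key named "url" collides with the positional parameter.
storage_options = {"url": "https://github.com/iterative/dataset-registry.git"}
fs, path = fsspec.core.url_to_fs(
    "dvc://workshop/satellite-data/jan_train.csv", **storage_options
)
# TypeError: url_to_fs() got multiple values for argument 'url'
```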
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7421/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7421/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5120/comments
https://api.github.com/repos/huggingface/datasets/issues/5120/events
https://github.com/huggingface/datasets/pull/5120
1,410,641,221
PR_kwDODunzps5A4X10
5,120
Fix `tqdm` zip bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/david1542", "id": 9879252, "login": "david1542", "node_id": "MDQ6VXNlcjk4NzkyNTI=", "organizations_url": "https://api.github.com/users/david1542/orgs", "received_events_url": "https://api.github.com/users/david1542/received_events", "repos_url": "https://api.github.com/users/david1542/repos", "site_admin": false, "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "type": "User", "url": "https://api.github.com/users/david1542", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@albertvillanova Thanks for your comment. What do you think about creating 2 `pbar` for each case? I see the `pbar_iterable` is initialized differently. Maybe `pbar` can also be initialized like that.", "@albertvillanova Another solution I implemented is to change `pbar_iterable` and add the `zip` to it. I updated the PR with this solution. Let me know what you think.", "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova Done :) Let me know what you think.", "@albertvillanova Thanks :) I also don't see an easy way to test this. This was just a problem in the way `tqdm` was used. I'm not sure we should cover it in tests.", "Hi, \r\n\r\nFirst of all, thanks for this PR. \r\nIt's the first time I join a discussion on GitHUB on problem resolution in libraries such as transformers, so I hope I comply to the best practices for an efficient communication...\r\n\r\nI am running `AutoTokenizer.from_pretrained` in a Google Colab notebook for using with BERT base. \r\nI am experiencing issue [5117](https://github.com/huggingface/datasets/issues/5117).\r\n\r\nEach time I run my notebook, I do:\r\n\r\n`! pip install transformers \r\n! pip install datasets \r\n! pip install huggingface_hub`\r\n\r\nAs I understand, the issue has been resolved and the solution merged to the released version of the code?\r\nSo I expect that the bug is resolved in my notebook, however this is not the case.\r\n\r\nDo I get something wrong? \r\nDo I have to implement some change in the source code myself?\r\n\r\nThanks in advance for your help!", "@Cochonaki Hi :) The problem was fixed but there wasn't a release since then. I believe a new release should come out in the upcoming weeks. Maybe someone from the core maintainers can answer that :)\r\n\r\ncc: @albertvillanova ", "Baby Haiti Coffee SE is born\n\nNH watch\n\nOn Sun, Oct 23, 2022 at 02:39 Dudu Lasry ***@***.***> wrote:\n\n> @Cochonaki <https://github.com/Cochonaki> Hi :) The problem was fixed but\n> there wasn't a release since then. I believe a new release should come out\n> in the upcoming weeks. Maybe someone from the core maintainers can answer\n> that :)\n>\n> cc: @albertvillanova <https://github.com/albertvillanova>\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5120#issuecomment-1288024546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAB4E2NCT7QO7W3PTQGDIKDWETMQ7ANCNFSM6AAAAAARGRBY2M>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n", "Hi, @Cochonaki.\r\n\r\nAs @david1542 pointed out, we have not made a release since this bug was fixed. We will make one in the following weeks.\r\n\r\nIn the meantime, if you would like to incorporate the bug fix, you can install `datasets` from this repo main branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```", "Thanks a lot @albertvillanova and @david1542, it works now!\r\nI am really thankful for your help, that encourages me to participate more in this community.\r\nSee you around!", "Welcome!!! πŸ€—" ]
2022-10-16T22:19:18Z
2022-10-23T10:27:53Z
2022-10-19T08:53:17Z
CONTRIBUTOR
null
null
null
This PR solves #5117 by wrapping the entire `zip` clause in `tqdm`. For more information, please check out this Stack Overflow thread: https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together
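A minimal illustration of the pattern with toy lists (not the actual `datasets` code): attaching the bar to one of `zip`'s arguments can leave it a step behind or short of 100%, so the fix wraps the `zip` itself and passes an explicit total, since `zip` objects expose no `__len__`:

```python
from tqdm import tqdm

a = list(range(1000))
b = list(range(1000))

# Problematic pattern: the bar is attached to one argument of zip, so it can
# lag or stop short of 100% depending on which iterator zip exhausts first
for x, y in zip(tqdm(a), b):
    pass

# Fixed pattern: wrap the zip itself and pass the total explicitly,
# because zip objects have no __len__ for tqdm to read
for x, y in tqdm(zip(a, b), total=len(a)):
    pass
```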
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5120/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5120/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5120.diff", "html_url": "https://github.com/huggingface/datasets/pull/5120", "merged_at": "2022-10-19T08:53:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5120.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5120" }
https://api.github.com/repos/huggingface/datasets/issues/5913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5913/comments
https://api.github.com/repos/huggingface/datasets/issues/5913/events
https://github.com/huggingface/datasets/issues/5913
1,731,427,484
I_kwDODunzps5nM3yc
5,913
I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17508662?v=4", "events_url": "https://api.github.com/users/cjt222/events{/privacy}", "followers_url": "https://api.github.com/users/cjt222/followers", "following_url": "https://api.github.com/users/cjt222/following{/other_user}", "gists_url": "https://api.github.com/users/cjt222/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cjt222", "id": 17508662, "login": "cjt222", "node_id": "MDQ6VXNlcjE3NTA4NjYy", "organizations_url": "https://api.github.com/users/cjt222/orgs", "received_events_url": "https://api.github.com/users/cjt222/received_events", "repos_url": "https://api.github.com/users/cjt222/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cjt222/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cjt222/subscriptions", "type": "User", "url": "https://api.github.com/users/cjt222", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, @cjt222.\r\n\r\nWhat is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead. ", "> Thanks for reporting, @cjt222.\r\n> \r\n> What is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead.\r\n\r\nThanks! I have encountered similar problems. I modify the json format from list to line and works!" ]
2023-05-30T02:55:26Z
2023-07-24T12:00:38Z
2023-07-24T12:00:38Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4... Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 84.35it/s] Extracting data files: 0%| | 0/1 [00:00<?, ?it/s] for _, table in generator: File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 114, in _generate_tables io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) File "pyarrow/_json.pyx", line 258, in pyarrow._json.read_json Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 27.72it/s] Generating train split: 0 examples [00:00, ? examples/s] File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 125, in pyarrow.lib.check_status pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2390448764 ### Steps to reproduce the bug 1、data_files = ["1.json", "2.json", "3.json"] 2、dataset = load_dataset('json', data_files=data_files) ### Expected behavior Read the dataset normally. ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-4.15.0-29-generic-x86_64-with-debian-buster-sid - Python version: 3.7.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 1.3.5
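A sketch of the conversion that resolved this for the reporter, reusing the file names from the reproduction steps above: turning each top-level JSON array into JSON Lines lets pyarrow parse the file block by block instead of materializing one oversized array:

```python
import json

# Convert each top-level JSON array file into JSON Lines
# (file names taken from the reproduction steps above)
for name in ["1.json", "2.json", "3.json"]:
    with open(name) as f:
        records = json.load(f)  # the whole file is a single JSON list
    with open(name.replace(".json", ".jsonl"), "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

# then: dataset = load_dataset("json", data_files=["1.jsonl", "2.jsonl", "3.jsonl"])
```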
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5913/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5913/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5334/comments
https://api.github.com/repos/huggingface/datasets/issues/5334/events
https://github.com/huggingface/datasets/pull/5334
1,477,421,927
PR_kwDODunzps5EY9zN
5,334
Clean up docstrings
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! Let us know if we can help :)\r\n\r\nSmall pref for having multiple PRs", "Awesome, thanks! Sorry this one is a little big, I'll open some smaller ones next :)" ]
2022-12-05T20:56:08Z
2022-12-09T01:44:25Z
2022-12-09T01:41:44Z
MEMBER
null
null
null
As raised by @polinaeterna in #5324, some of the docstrings are a bit of a mess because they mix Markdown and Sphinx syntax. This PR fixes the docstring for `DatasetBuilder`. I'll start working on cleaning up the rest of the docstrings and removing the old Sphinx syntax (let me know if you prefer one big PR with all the cleaned changes or multiple smaller ones)! 🧼
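An illustrative before/after of the syntax change being described (toy docstrings, not copied from the PR):

```python
def old_style(name):
    """Load a dataset builder.

    :param name: Name of the dataset.
    :return: A :class:`DatasetBuilder` instance.
    """

def new_style(name):
    """Load a dataset builder.

    Args:
        name (`str`): Name of the dataset.

    Returns:
        [`DatasetBuilder`]: A builder instance.
    """
```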
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5334/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5334/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5334.diff", "html_url": "https://github.com/huggingface/datasets/pull/5334", "merged_at": "2022-12-09T01:41:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/5334.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5334" }
https://api.github.com/repos/huggingface/datasets/issues/5529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5529/comments
https://api.github.com/repos/huggingface/datasets/issues/5529/events
https://github.com/huggingface/datasets/pull/5529
1,582,501,233
PR_kwDODunzps5J26Fq
5,529
Fix `datasets.load_from_disk`, `DatasetDict.load_from_disk` and `Dataset.load_from_disk`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm, should this also be updated in `Dataset.load_from_disk` and `DatasetDict.load_from_disk`? https://github.com/huggingface/datasets/pull/5466 As there the paths are joined using `Path(..., ...)` and it won't work on Windows OS according to that PR, right?", "Hi, @lhoestq could you review this PR? Thank you in advance and sorry for the ping πŸ€— ", "Besides that, I was also thinking of adding a `skip_validation` boolean arg in both `Dataset.load_from_disk` and `DatasetDict.load_from_disk` to avoid duplicating those calls too when those functions are called from `datasets.load_from_disk`.\r\n\r\nSo that `skip_validation` is set to `False` by default, but passed as `True` if called from `datasets.load_from_disk`, and that just affects the file checking part of the code on both functions, do you agree @lhoestq?", "I think we should always verify", "> I think we should always verify\r\n\r\nBut with the current way we're also verifying twice right? First on `datasets.load_from_disk` then on `Dataset.load_from_disk`, right?\r\n\r\nMaybe a warning before calling either `Dataset.load_from_disk` or `DatasetDict.load_from_disk` is enough?\r\n\r\ne.g. **\"Consider using `Dataset.load_from_disk` instead to avoid `fsspec` from verifying the presence of `dataset_info.json` and `state.json` in the remote filesystem twice.\"** to be showed just when `fs` is remote.", "I don't think it's worth adding a new argument just for that. Usually we keep the set of arguments to the strict minimum", "> I don't think it's worth adding a new argument just for that. Usually we keep the set of arguments to the strict minimum\r\n\r\nWhat about the warning?\r\n\r\nAnyway, if you don't think that's worth it feel free to merge πŸ‘πŸ» ", "> What about the warning?\r\n\r\nWe may show warnings for suggestions, but only if the user does a very unoptimized thing. Here we're not at that level ^^'", "Thanks for the explanation and feedback @lhoestq πŸ€— ", "> Thank you :) Added my last suggestions:\r\n\r\nThanks for the feedback, I agree with everything besides one nit! 
πŸ‘πŸ» ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011556 / 0.011353 (0.000203) | 0.006213 / 0.011008 (-0.004796) | 0.132390 / 0.038508 (0.093882) | 0.034609 / 0.023109 (0.011500) | 0.361156 / 0.275898 (0.085258) | 0.402524 / 0.323480 (0.079044) | 0.009138 / 0.007986 (0.001152) | 0.005728 / 0.004328 (0.001399) | 0.115406 / 0.004250 (0.111156) | 0.041440 / 0.037052 (0.004388) | 0.370232 / 0.258489 (0.111742) | 0.409944 / 0.293841 (0.116103) | 0.053803 / 0.128546 (-0.074744) | 0.022029 / 0.075646 (-0.053617) | 0.400325 / 0.419271 (-0.018946) | 0.055324 / 0.043533 (0.011791) | 0.368699 / 0.255139 (0.113560) | 0.391836 / 0.283200 (0.108636) | 0.099356 / 0.141683 (-0.042327) | 1.687881 / 1.452155 (0.235726) | 1.752202 / 1.492716 (0.259485) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012992 / 0.018006 (-0.005014) | 0.518756 / 0.000490 (0.518267) | 0.004702 / 0.000200 (0.004502) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028371 / 0.037411 (-0.009041) | 0.127058 / 0.014526 (0.112532) | 0.136908 / 0.176557 (-0.039649) | 0.210168 / 0.737135 (-0.526968) | 0.139600 / 0.296338 (-0.156738) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.570901 / 0.215209 (0.355692) | 5.967213 / 2.077655 (3.889558) | 2.286745 / 1.504120 (0.782626) | 1.950682 / 1.541195 (0.409487) | 2.062536 / 
1.468490 (0.594046) | 1.255671 / 4.584777 (-3.329106) | 5.454951 / 3.745712 (1.709238) | 3.076429 / 5.269862 (-2.193433) | 2.082871 / 4.565676 (-2.482806) | 0.150069 / 0.424275 (-0.274206) | 0.014864 / 0.007607 (0.007257) | 0.774672 / 0.226044 (0.548627) | 7.873992 / 2.268929 (5.605064) | 3.196165 / 55.444624 (-52.248459) | 2.366854 / 6.876477 (-4.509623) | 2.407381 / 2.142072 (0.265309) | 1.419130 / 4.805227 (-3.386097) | 0.249210 / 6.500664 (-6.251454) | 0.088648 / 0.075469 (0.013179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.528368 / 1.841788 (-0.313420) | 17.554000 / 8.074308 (9.479692) | 20.773300 / 10.191392 (10.581908) | 0.216903 / 0.680424 (-0.463521) | 0.046544 / 0.534201 (-0.487657) | 0.538238 / 0.579283 (-0.041045) | 0.673926 / 0.434364 (0.239562) | 0.656108 / 0.540337 (0.115770) | 0.774026 / 1.386936 (-0.612910) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010177 / 0.011353 (-0.001176) | 0.006334 / 0.011008 (-0.004675) | 0.100097 / 0.038508 (0.061589) | 0.039996 / 0.023109 (0.016887) | 0.420225 / 0.275898 (0.144327) | 0.437694 / 0.323480 (0.114214) | 0.007987 / 0.007986 (0.000002) | 0.005782 / 0.004328 (0.001454) | 0.106421 / 0.004250 (0.102171) | 0.046993 / 0.037052 (0.009941) | 0.397304 / 0.258489 (0.138815) | 0.441780 / 0.293841 (0.147939) | 0.064594 / 0.128546 (-0.063952) | 0.020823 / 0.075646 (-0.054823) | 0.108854 / 0.419271 (-0.310417) | 0.076457 / 0.043533 (0.032924) | 0.401712 / 0.255139 (0.146573) | 0.459292 / 0.283200 (0.176093) | 0.125044 / 0.141683 (-0.016639) | 1.765531 / 1.452155 (0.313377) | 1.845429 / 1.492716 (0.352713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225549 / 0.018006 (0.207543) | 0.524402 / 0.000490 (0.523913) | 0.006994 / 0.000200 (0.006794) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033787 / 0.037411 (-0.003624) | 0.144895 / 0.014526 (0.130369) | 0.147185 / 0.176557 (-0.029371) | 0.228227 / 0.737135 (-0.508908) | 0.164967 / 0.296338 (-0.131371) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.628242 / 0.215209 (0.413033) | 6.348176 / 2.077655 (4.270522) | 2.615832 / 1.504120 (1.111712) | 2.217481 / 1.541195 (0.676286) | 2.287058 / 1.468490 (0.818568) | 1.322854 / 4.584777 (-3.261923) | 5.547831 / 3.745712 (1.802119) | 3.199467 / 5.269862 (-2.070395) | 2.135297 / 4.565676 (-2.430380) | 0.165134 / 0.424275 (-0.259141) | 0.014753 / 0.007607 (0.007146) | 0.778579 / 0.226044 (0.552535) | 7.982329 / 2.268929 (5.713401) | 3.331712 / 55.444624 (-52.112913) | 2.642606 / 6.876477 (-4.233871) | 2.699362 / 2.142072 (0.557290) | 1.572268 / 4.805227 (-3.232959) | 0.273348 / 6.500664 (-6.227316) | 0.082975 / 0.075469 (0.007506) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.730421 / 1.841788 (-0.111367) | 18.154495 / 8.074308 (10.080187) | 20.969885 / 10.191392 (10.778493) | 0.233652 / 0.680424 (-0.446772) | 0.026609 / 0.534201 (-0.507592) | 0.546874 / 0.579283 (-0.032410) | 0.602891 / 0.434364 (0.168527) | 0.641073 / 0.540337 (0.100736) | 0.772138 / 1.386936 (-0.614798) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#20703458e3c42ee7bfc1a26e47805c0db4dda2d6 \"CML watermark\")\n" ]
2023-02-13T14:54:55Z
2023-02-23T18:14:32Z
2023-02-23T18:05:26Z
MEMBER
null
null
null
## What's in this PR? After playing around a little bit with 🤗`datasets` in Google Cloud Storage (GCS), I found some things that should be fixed IMO in the code: * `datasets.load_from_disk` is not checking whether `state.json` is there too when trying to load a `Dataset`, just `dataset_info.json` is checked * `DatasetDict.load_from_disk` is not checking whether `state.json` is there too when redirecting the user to load it as `datasets.load_from_disk`, just `dataset_info.json` is checked, which is misleading, as it won't be loadable that way either * `Dataset.load_from_disk` is missing the `extract_path_from_uri` call before checking in the `fs` whether `dataset_info.json` and `dataset_dict.json` exist, which when using `gcsfs` leads to a 400 error code (not blocking) due to `gcsfs.retry.HttpError: Invalid bucket name: 'gs:', 400` * And, finally, the exception messages are a little bit misleading / incomplete IMO, so I've tried to include all the relevant information in the messages to avoid issues when interpreting the exceptions
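A rough sketch of the validation described above, with a hypothetical bucket name and assuming `gcsfs` is installed; `url_to_fs` returns the path with the `gs://` scheme already stripped, which is what the missing `extract_path_from_uri` call would achieve:

```python
import posixpath
import fsspec

# url_to_fs returns the filesystem plus the scheme-stripped path; querying
# fs with the raw "gs://..." string is what triggers the
# "Invalid bucket name: 'gs:'" error mentioned above.
fs, dest = fsspec.core.url_to_fs("gs://my-bucket/my-dataset")
for fname in ("dataset_info.json", "state.json"):
    if not fs.isfile(posixpath.join(dest, fname)):
        raise FileNotFoundError(f"No {fname} found at {dest!r}: not a `Dataset` directory.")
```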
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5529/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5529/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5529.diff", "html_url": "https://github.com/huggingface/datasets/pull/5529", "merged_at": "2023-02-23T18:05:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/5529.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5529" }
https://api.github.com/repos/huggingface/datasets/issues/7016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7016/comments
https://api.github.com/repos/huggingface/datasets/issues/7016/events
https://github.com/huggingface/datasets/issues/7016
2,383,262,608
I_kwDODunzps6ODbOQ
7,016
`drop_duplicates` method
{ "avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4", "events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}", "followers_url": "https://api.github.com/users/MohamedAliRashad/followers", "following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_user}", "gists_url": "https://api.github.com/users/MohamedAliRashad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MohamedAliRashad", "id": 26205298, "login": "MohamedAliRashad", "node_id": "MDQ6VXNlcjI2MjA1Mjk4", "organizations_url": "https://api.github.com/users/MohamedAliRashad/orgs", "received_events_url": "https://api.github.com/users/MohamedAliRashad/received_events", "repos_url": "https://api.github.com/users/MohamedAliRashad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohamedAliRashad/subscriptions", "type": "User", "url": "https://api.github.com/users/MohamedAliRashad", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "There is an open issue #2514 about this which also proposes solutions." ]
2024-07-01T09:01:06Z
2024-07-20T06:51:58Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request `drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one) ### Motivation Ease of use ### Your contribution I don't think I am good enough to help
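Until a built-in method exists, a minimal workaround sketch is to round-trip through pandas, which already has `drop_duplicates`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a"], "label": [0, 1, 0]})
# Round-trip through pandas to deduplicate, dropping the pandas index on the way back
deduped = Dataset.from_pandas(ds.to_pandas().drop_duplicates(), preserve_index=False)
print(deduped.num_rows)  # 2
```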
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7016/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7016/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5190/comments
https://api.github.com/repos/huggingface/datasets/issues/5190/events
https://github.com/huggingface/datasets/issues/5190
1,433,014,626
I_kwDODunzps5VahFi
5,190
`path` is `None` when downloading a custom audio dataset from the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi! Yes, this is expected behavior - we do this as a security measure to not leak local paths (this info would be useless on other users' machines anyways) and only push audio bytes. \r\n" ]
2022-11-02T11:51:25Z
2022-11-02T12:55:02Z
2022-11-02T12:55:02Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None`. Here's an example: ```python from datasets import load_dataset ds = load_dataset("lewtun/audio-test-push") ds["train"][0] # { # "audio": { # "path": None, <-- Is this expected? # "array": array( # [ # 3.97140226e-07, # 7.30310290e-07, # 7.56406735e-07, # ..., # -1.19636677e-01, # -1.16811886e-01, # -1.12441722e-01, # ] # ), # "sampling_rate": 44100, # }, # "song_id": 0, # "genre_id": 0, # "genre": "Electronic", # } ``` Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :) ### Steps to reproduce the bug 1. Create an audio dataset with the `audiofolder` feature 2. Push the dataset to the Hub with `push_to_hub()` 3. Download the Hub dataset and inspect the `audio.path` feature ### Expected behavior `audio.path` points to the file associated with the audio data ### Environment info - `datasets` version: 2.6.2.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5190/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5190/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7112/comments
https://api.github.com/repos/huggingface/datasets/issues/7112/events
https://github.com/huggingface/datasets/issues/7112
2,475,004,644
I_kwDODunzps6ThZLk
7,112
cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/174590283?v=4", "events_url": "https://api.github.com/users/SoumyaMB10/events{/privacy}", "followers_url": "https://api.github.com/users/SoumyaMB10/followers", "following_url": "https://api.github.com/users/SoumyaMB10/following{/other_user}", "gists_url": "https://api.github.com/users/SoumyaMB10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SoumyaMB10", "id": 174590283, "login": "SoumyaMB10", "node_id": "U_kgDOCmgJSw", "organizations_url": "https://api.github.com/users/SoumyaMB10/orgs", "received_events_url": "https://api.github.com/users/SoumyaMB10/received_events", "repos_url": "https://api.github.com/users/SoumyaMB10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SoumyaMB10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoumyaMB10/subscriptions", "type": "User", "url": "https://api.github.com/users/SoumyaMB10", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@sayakpaul please advice ", "Hits the same dependency conflict" ]
2024-08-20T08:13:55Z
2024-09-20T15:30:03Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug !pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible. ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible. To solve the above error: !pip install pyarrow==14.0.1 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible. ### Steps to reproduce the bug !pip install datasets>=2.19.1 ### Expected behavior Runs without dependency errors. ### Environment info Diffusers version: 0.31.0.dev0 Platform: Linux-6.1.85+-x86_64-with-glibc2.35 Running on Google Colab?: Yes Python version: 3.10.12 PyTorch version (GPU?): 2.3.1+cu121 (True) Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu) Jax version: 0.4.26 JaxLib version: 0.4.26 Huggingface_hub version: 0.23.5 Transformers version: 4.42.4 Accelerate version: 0.32.1 PEFT version: 0.7.0 Bitsandbytes version: not installed Safetensors version: 0.4.4 xFormers version: not installed Accelerator: Tesla T4, 15360 MiB Using GPU in script?: Using distributed or parallel set-up in script?:
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7112/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7112/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/4792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4792/comments
https://api.github.com/repos/huggingface/datasets/issues/4792/events
https://github.com/huggingface/datasets/issues/4792
1,328,593,929
I_kwDODunzps5PMLwJ
4,792
Add DocVQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "Thanks for proposing, @NielsRogge.\r\n\r\nPlease, note this dataset requires registering in their website and their Terms and Conditions state we cannot distribute their URL:\r\n```\r\n1. You will NOT distribute the download URLs\r\n...\r\n```" ]
2022-08-04T13:07:26Z
2022-08-08T05:31:20Z
null
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Adding a Dataset - **Name:** DocVQA - **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a β€œpurpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this information. - **Paper:** https://arxiv.org/abs/2007.00398 - **Data:** https://www.docvqa.org/datasets/docvqa - **Motivation:** Models like LayoutLM and Donut in the Transformers library are fine-tuned on DocVQA. Would be very handy to directly load this dataset from the hub. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4792/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4792/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/4644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4644/comments
https://api.github.com/repos/huggingface/datasets/issues/4644/events
https://github.com/huggingface/datasets/pull/4644
1,296,018,052
PR_kwDODunzps468mQb
4,644
[Minor fix] Typo correction
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-07-06T15:37:02Z
2022-07-06T15:56:32Z
2022-07-06T15:45:16Z
CONTRIBUTOR
null
null
null
recieve -> receive
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4644/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4644.diff", "html_url": "https://github.com/huggingface/datasets/pull/4644", "merged_at": "2022-07-06T15:45:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/4644.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4644" }
https://api.github.com/repos/huggingface/datasets/issues/5987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5987/comments
https://api.github.com/repos/huggingface/datasets/issues/5987/events
https://github.com/huggingface/datasets/issues/5987
1,773,047,909
I_kwDODunzps5prpBl
5,987
Why max_shard_size is not supported in load_dataset and passed to download_and_prepare
{ "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/npuichigo", "id": 11533479, "login": "npuichigo", "node_id": "MDQ6VXNlcjExNTMzNDc5", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "repos_url": "https://api.github.com/users/npuichigo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "type": "User", "url": "https://api.github.com/users/npuichigo", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.", "In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. (On azure blob or hdfs which may automatically download the shard from background)", "But `load_dataset` doesn't support caching (and reading) Arrow datasets from remote storage. \r\n\r\n`load_datset_builder` + `download_and_prepare` is not equal to `load_dataset`. The latter has one more step, `builder.as_dataset`, that memory-maps Arrow files, which only works for local files.", "Thanks. So if I want to use `IterableDataset` and control the size of single arrow file, how should I organize the data loader? Maybe `load_dataset_build` + `download_and_prepare` + `builder.as_dataset` + `dataset.to_iterable_dataset`?", "Yes, this should work.\r\n\r\nI think we can expose `max_shard_size` in `load_dataset`, so feel free to open a PR." ]
2023-06-25T04:19:13Z
2023-06-29T16:06:08Z
2023-06-29T16:06:08Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809

`max_shard_size` is not supported in `load_dataset` and is not passed on to `download_and_prepare`. What I can do is skip `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead.

### Steps to reproduce the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809

### Expected behavior
Users can define the max shard size.

### Environment info
datasets==2.13.1
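The workaround sketched in the discussion above can be written out as follows. This is a minimal sketch, assuming a placeholder dataset name ("imdb") and an illustrative shard size, not an official recipe:

```python
from datasets import load_dataset_builder

# Build the dataset manually so that max_shard_size can be passed to
# download_and_prepare; "imdb" and "500MB" are placeholder values.
builder = load_dataset_builder("imdb")
builder.download_and_prepare(max_shard_size="500MB")

# as_dataset memory-maps the locally prepared Arrow shards...
ds = builder.as_dataset(split="train")

# ...and to_iterable_dataset turns the result into an IterableDataset,
# as suggested in the comments above.
ids = ds.to_iterable_dataset()
```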
{ "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/npuichigo", "id": 11533479, "login": "npuichigo", "node_id": "MDQ6VXNlcjExNTMzNDc5", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "repos_url": "https://api.github.com/users/npuichigo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "type": "User", "url": "https://api.github.com/users/npuichigo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5987/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6996/comments
https://api.github.com/repos/huggingface/datasets/issues/6996/events
https://github.com/huggingface/datasets/pull/6996
2,371,841,671
PR_kwDODunzps5zdAg0
6,996
Remove deprecated code
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
{ "closed_at": null, "closed_issues": 5, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 3, "state": "open", "title": "3.0", "updated_at": "2024-08-21T09:35:06Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6996). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005296 / 0.011353 (-0.006057) | 0.003991 / 0.011008 (-0.007017) | 0.063892 / 0.038508 (0.025384) | 0.031185 / 0.023109 (0.008076) | 0.248300 / 0.275898 (-0.027598) | 0.270326 / 0.323480 (-0.053154) | 0.004343 / 0.007986 (-0.003643) | 0.002735 / 0.004328 (-0.001594) | 0.049751 / 0.004250 (0.045501) | 0.045629 / 0.037052 (0.008577) | 0.257584 / 0.258489 (-0.000905) | 0.284697 / 0.293841 (-0.009144) | 0.029403 / 0.128546 (-0.099143) | 0.012155 / 0.075646 (-0.063491) | 0.215241 / 0.419271 (-0.204031) | 0.036258 / 0.043533 (-0.007275) | 0.246878 / 0.255139 (-0.008261) | 0.268728 / 0.283200 (-0.014472) | 0.018113 / 0.141683 (-0.123570) | 1.130733 / 1.452155 (-0.321422) | 1.205148 / 1.492716 (-0.287568) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095196 / 0.018006 (0.077189) | 0.300741 / 0.000490 (0.300252) | 0.000220 / 0.000200 (0.000020) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018319 / 0.037411 (-0.019093) | 0.062766 / 0.014526 (0.048240) | 0.074748 / 0.176557 (-0.101809) | 0.122177 / 0.737135 (-0.614959) | 0.076652 / 0.296338 (-0.219687) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284508 / 0.215209 (0.069299) | 2.838298 / 2.077655 (0.760643) | 1.480098 / 1.504120 (-0.024022) | 1.362882 / 1.541195 (-0.178313) | 1.389036 / 1.468490 (-0.079454) | 0.747485 / 4.584777 (-3.837292) | 2.385333 / 3.745712 (-1.360379) | 2.924148 / 5.269862 (-2.345713) | 1.869061 / 4.565676 (-2.696616) | 0.079909 / 0.424275 (-0.344366) | 0.005173 / 0.007607 (-0.002434) | 0.345694 / 0.226044 (0.119650) | 3.430648 / 2.268929 (1.161719) | 1.837108 / 55.444624 (-53.607516) | 1.528498 / 6.876477 (-5.347979) | 1.567128 / 2.142072 (-0.574944) | 0.804615 / 4.805227 (-4.000612) | 0.135361 / 6.500664 (-6.365303) | 0.042195 / 0.075469 (-0.033274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986240 / 1.841788 (-0.855548) | 11.428084 / 8.074308 (3.353776) | 9.168227 / 10.191392 (-1.023165) | 0.131917 / 0.680424 (-0.548507) | 0.014324 / 0.534201 (-0.519877) | 0.302188 / 0.579283 (-0.277095) | 0.263790 / 0.434364 (-0.170574) | 0.343799 / 0.540337 (-0.196539) | 0.428518 / 1.386936 (-0.958418) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005734 / 0.011353 (-0.005618) | 0.003914 / 0.011008 (-0.007094) | 0.050105 / 0.038508 (0.011596) | 0.031748 / 0.023109 (0.008639) | 0.266392 / 0.275898 (-0.009506) | 0.301221 / 0.323480 (-0.022259) | 0.004408 / 0.007986 (-0.003578) | 0.002811 / 0.004328 (-0.001517) | 0.049103 / 0.004250 (0.044853) | 0.041030 / 0.037052 (0.003978) | 0.281003 / 0.258489 (0.022513) | 0.318086 / 0.293841 (0.024245) | 0.032695 / 0.128546 (-0.095852) | 0.012239 / 0.075646 (-0.063408) | 0.060387 / 0.419271 (-0.358885) | 0.034179 / 0.043533 (-0.009354) | 0.266020 / 0.255139 (0.010881) | 0.288551 / 0.283200 (0.005351) | 0.018778 / 0.141683 (-0.122905) | 1.214959 / 1.452155 (-0.237196) | 1.268269 / 1.492716 (-0.224447) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095449 / 0.018006 (0.077443) | 0.305733 / 0.000490 (0.305243) | 0.000216 / 0.000200 (0.000016) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022565 / 0.037411 (-0.014847) | 0.077266 / 0.014526 (0.062740) | 0.089345 / 0.176557 (-0.087212) | 0.128900 / 0.737135 (-0.608236) | 0.089746 / 0.296338 (-0.206593) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298221 / 0.215209 (0.083012) | 2.957671 / 2.077655 (0.880016) | 1.584674 / 1.504120 (0.080554) | 1.456906 / 1.541195 (-0.084288) | 1.467609 / 1.468490 (-0.000881) | 0.718726 / 4.584777 (-3.866051) | 0.948157 / 3.745712 (-2.797555) | 2.953559 / 5.269862 (-2.316303) | 1.895182 / 4.565676 (-2.670494) | 0.078380 / 0.424275 (-0.345895) | 0.005640 / 0.007607 (-0.001968) | 0.352978 / 0.226044 (0.126933) | 3.436341 / 2.268929 (1.167413) | 1.962418 / 55.444624 (-53.482206) | 1.655444 / 6.876477 (-5.221033) | 1.680082 / 2.142072 (-0.461990) | 0.792920 / 4.805227 (-4.012307) | 0.133518 / 6.500664 (-6.367146) | 0.041123 / 0.075469 (-0.034346) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022546 / 1.841788 (-0.819242) | 12.076711 / 8.074308 (4.002402) | 10.159920 / 10.191392 (-0.031472) | 0.143709 / 0.680424 (-0.536715) | 0.015499 / 0.534201 (-0.518702) | 0.302096 / 0.579283 (-0.277187) | 0.125202 / 0.434364 (-0.309162) | 0.349499 / 0.540337 (-0.190839) | 0.456019 / 1.386936 (-0.930917) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c3aaf2108e3fd77f92aed3b1dce0fd551daf9a0a \"CML watermark\")\n" ]
2024-06-25T06:54:40Z
2024-08-21T09:42:52Z
2024-08-21T09:35:06Z
MEMBER
null
null
null
Remove deprecated code, as part of the 3.0 release.

First merge:
- [x] #6983
- [x] #6987
- [x] #6999
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6996/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6996/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6996.diff", "html_url": "https://github.com/huggingface/datasets/pull/6996", "merged_at": "2024-08-21T09:35:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/6996.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6996" }
https://api.github.com/repos/huggingface/datasets/issues/7420
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7420/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7420/comments
https://api.github.com/repos/huggingface/datasets/issues/7420/events
https://github.com/huggingface/datasets/issues/7420
2,876,281,928
I_kwDODunzps6rcJRI
7,420
better correspondence between cached and saved datasets created using from_generator
{ "avatar_url": "https://avatars.githubusercontent.com/u/12157034?v=4", "events_url": "https://api.github.com/users/vttrifonov/events{/privacy}", "followers_url": "https://api.github.com/users/vttrifonov/followers", "following_url": "https://api.github.com/users/vttrifonov/following{/other_user}", "gists_url": "https://api.github.com/users/vttrifonov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vttrifonov", "id": 12157034, "login": "vttrifonov", "node_id": "MDQ6VXNlcjEyMTU3MDM0", "organizations_url": "https://api.github.com/users/vttrifonov/orgs", "received_events_url": "https://api.github.com/users/vttrifonov/received_events", "repos_url": "https://api.github.com/users/vttrifonov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vttrifonov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vttrifonov/subscriptions", "type": "User", "url": "https://api.github.com/users/vttrifonov", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2025-02-24T22:14:37Z
2025-02-26T03:10:22Z
null
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request
At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular dataset is to use `save_to_disk`, which needs to create a copy of the cached dataset. For large datasets this can end up wasting a lot of space. In my case the saving operation failed, so I am stuck with a large cached dataset and no clear way to convert it to a `Dataset` that I can use.

The requested feature is to provide a way to load a cached dataset using `.load_from_disk`. Alternatively, `.from_generator` could create the dataset at a specified location so that it can be loaded from there with `.load_from_disk`.

### Motivation
I have the following workflow, which has exposed some awkwardness about the Datasets saving/caching.

1. I created a cached dataset using `.from_generator`, which was cached in a folder. This dataset is rather large (~600GB) with many shards.
2. I tried to save this dataset using `.save_to_disk` to another location so that I can use it later as a `Dataset`. This essentially creates another copy (for a total of 1.2TB!) of what is already in the cache... In my case the saving operation keeps dying for some reason, and I am stuck with a cached dataset and no copy.
3. Now I am trying to "save" the existing cached dataset, but it is not clear how to access the cached files after `.from_generator` has finished, e.g. from a different process. I should not even be looking at the cache, but I really do not want to waste another 2 hours regenerating the set only to have it fail again (I already did this a couple of times).
   - I tried `.load_from_disk`, but it does not work with cached files and complains that this is not a `Dataset` (!).
   - I looked at `.from_file`, which takes one file, but the cache has many files (shards), so I am not sure how to make this work.
   - I tried `.load_dataset`, but this seems to either try to "download" a copy (of a file which is already in the local file system!) which I will then need to save, or I need to use `streaming=True` to create an `IterableDataset`, which I then need to convert (using the cache) to a `Dataset` so that I can save it. With both options I will end up with 3 copies of the same dataset, for a total of ~2TB! I am hoping there is another way to do this...

Maybe I am missing something here: I looked at the docs and forums but no luck. I have a bunch of Arrow files cached by `Dataset.from_generator` and no clean way to make them into a `Dataset` that I can use.

This all could be so much easier if `load_from_disk` could recognize the cached files and produce a `Dataset`: after the cache is created I would not have to "save" it again, and I could just load it whenever I need it. At the moment `load_from_disk` needs `state.json`, which is lacking in the cache folder. So perhaps `.from_generator` could be made to "finalize" the dataset (e.g. create `state.json`) once it is done so that it can be loaded easily. Or provide `.from_generator` with a `save_to_dir` parameter, in addition to `cache_dir`, which can be used for the whole process, including creating the `state.json` at the end.

As a proof of concept, I just created `state.json` by hand and `load_from_disk` worked using the cache! So it seems to be the missing piece here.

### Your contribution
Time permitting, I can look into `.from_generator` to see if adding `state.json` is feasible.
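One possible way to stitch the cached shards back into a usable `Dataset` without copying them, sketched here under the assumption that the shard files can be globbed from the cache folder (the path and filename pattern below are placeholders, not the actual cache layout):

```python
import glob
from datasets import Dataset, concatenate_datasets

# Memory-map each cached Arrow shard individually, then concatenate;
# no data is copied until save_to_disk is called explicitly.
shard_files = sorted(glob.glob("/path/to/cache/dataset-train-*.arrow"))
ds = concatenate_datasets([Dataset.from_file(f) for f in shard_files])
print(ds)
```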
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7420/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7420/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6814/comments
https://api.github.com/repos/huggingface/datasets/issues/6814/events
https://github.com/huggingface/datasets/issues/6814
2,245,857,902
I_kwDODunzps6F3RJu
6,814
`map` with `num_proc` > 1 leads to OOM
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! You can try to reduce `writer_batch_size`. It corresponds to the number of samples that stay in RAM before being flushed to disk" ]
2024-04-16T11:56:03Z
2024-04-19T11:53:41Z
null
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
When running `map` on a Parquet dataset loaded from the local machine, the RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` every n steps in order to prevent this?

### Steps to reproduce the bug
```python
ds = load_dataset("parquet", data_files=dataset_path, split="train")
ds = ds.shard(num_shards=4, index=0)
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
ds = ds.map(prepare_dataset, num_proc=32, writer_batch_size=1000, keep_in_memory=False, desc="preprocess dataset")
```

```python
def prepare_dataset(batch):
    # load audio
    sample = batch["audio"]
    inputs = feature_extractor(sample["array"], sampling_rate=16000)
    batch["input_values"] = inputs.input_values[0]
    batch["input_length"] = len(sample["array"].squeeze())
    return batch
```

### Expected behavior
It shouldn't run into an OOM problem.

### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
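A minimal sketch of the suggestion from the comment above, applied to the `map` call from the reproduction snippet (it reuses `ds` and `prepare_dataset` from that snippet; the value 100 is illustrative, not a recommendation):

```python
ds = ds.map(
    prepare_dataset,
    num_proc=32,
    writer_batch_size=100,  # default is 1000; smaller keeps fewer samples in RAM before flushing to the on-disk cache
    keep_in_memory=False,
    desc="preprocess dataset",
)
```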
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6814/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6211/comments
https://api.github.com/repos/huggingface/datasets/issues/6211/events
https://github.com/huggingface/datasets/pull/6211
1,880,265,906
PR_kwDODunzps5Ze-pv
6,211
Fix empty splitinfo json
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007756 / 0.011353 (-0.003597) | 0.004733 / 0.011008 (-0.006275) | 0.095874 / 0.038508 (0.057366) | 0.081957 / 0.023109 (0.058848) | 0.426430 / 0.275898 (0.150532) | 0.457670 / 0.323480 (0.134190) | 0.004448 / 0.007986 (-0.003537) | 0.004956 / 0.004328 (0.000627) | 0.074195 / 0.004250 (0.069945) | 0.061101 / 0.037052 (0.024048) | 0.435134 / 0.258489 (0.176645) | 0.457245 / 0.293841 (0.163404) | 0.034945 / 0.128546 (-0.093601) | 0.010028 / 0.075646 (-0.065618) | 0.350724 / 0.419271 (-0.068548) | 0.064433 / 0.043533 (0.020901) | 0.417882 / 0.255139 (0.162743) | 0.445087 / 0.283200 (0.161887) | 0.027576 / 0.141683 (-0.114107) | 1.824066 / 1.452155 (0.371912) | 1.957568 / 1.492716 (0.464852) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238568 / 0.018006 (0.220562) | 0.505289 / 0.000490 (0.504799) | 0.003527 / 0.000200 (0.003327) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032839 / 0.037411 (-0.004572) | 0.096708 / 0.014526 (0.082182) | 0.112100 / 0.176557 (-0.064456) | 0.177215 / 0.737135 (-0.559920) | 0.111273 / 0.296338 (-0.185066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475200 / 0.215209 (0.259991) | 4.725737 / 2.077655 (2.648082) | 2.414672 / 1.504120 (0.910552) | 2.196357 / 1.541195 (0.655162) | 2.329298 / 1.468490 
(0.860808) | 0.575258 / 4.584777 (-4.009519) | 4.343630 / 3.745712 (0.597918) | 3.837665 / 5.269862 (-1.432196) | 2.497970 / 4.565676 (-2.067706) | 0.066467 / 0.424275 (-0.357808) | 0.008680 / 0.007607 (0.001073) | 0.569923 / 0.226044 (0.343878) | 5.634230 / 2.268929 (3.365302) | 2.959222 / 55.444624 (-52.485402) | 2.535954 / 6.876477 (-4.340523) | 2.804844 / 2.142072 (0.662771) | 0.682000 / 4.805227 (-4.123227) | 0.158193 / 6.500664 (-6.342471) | 0.072315 / 0.075469 (-0.003154) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.578148 / 1.841788 (-0.263639) | 22.993419 / 8.074308 (14.919110) | 16.524477 / 10.191392 (6.333085) | 0.169415 / 0.680424 (-0.511009) | 0.021520 / 0.534201 (-0.512681) | 0.455970 / 0.579283 (-0.123313) | 0.489022 / 0.434364 (0.054658) | 0.535656 / 0.540337 (-0.004682) | 0.802341 / 1.386936 (-0.584595) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008002 / 0.011353 (-0.003351) | 0.005577 / 0.011008 (-0.005431) | 0.087803 / 0.038508 (0.049295) | 0.091285 / 0.023109 (0.068176) | 0.500514 / 0.275898 (0.224616) | 0.549770 / 0.323480 (0.226290) | 0.006125 / 0.007986 (-0.001861) | 0.004031 / 0.004328 (-0.000297) | 0.077941 / 0.004250 (0.073691) | 0.071419 / 0.037052 (0.034367) | 0.497570 / 0.258489 (0.239081) | 0.542454 / 0.293841 (0.248613) | 0.040827 / 0.128546 (-0.087719) | 0.011029 / 0.075646 (-0.064617) | 0.088788 / 0.419271 (-0.330484) | 0.056970 / 0.043533 (0.013438) | 0.523934 / 0.255139 (0.268795) | 0.552507 / 0.283200 (0.269308) | 0.029794 / 0.141683 (-0.111889) | 1.817778 / 1.452155 (0.365623) | 1.955843 / 1.492716 (0.463126) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246992 / 0.018006 (0.228986) | 0.467879 / 0.000490 (0.467390) | 0.005439 / 0.000200 (0.005239) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037774 / 0.037411 (0.000363) | 0.109332 / 0.014526 (0.094806) | 0.120103 / 0.176557 (-0.056454) | 0.185259 / 0.737135 (-0.551876) | 0.126189 / 0.296338 (-0.170149) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492856 / 0.215209 (0.277646) | 5.033209 / 2.077655 (2.955554) | 2.885551 / 1.504120 (1.381431) | 2.480304 / 1.541195 (0.939109) | 2.579092 / 1.468490 (1.110602) | 0.557671 / 4.584777 (-4.027106) | 4.352765 / 3.745712 (0.607053) | 4.039124 / 5.269862 (-1.230738) | 2.534342 / 4.565676 (-2.031335) | 0.067267 / 0.424275 (-0.357008) | 0.008891 / 0.007607 (0.001284) | 0.591592 / 0.226044 (0.365547) | 5.939982 / 2.268929 (3.671053) | 3.258389 / 55.444624 (-52.186235) | 2.843899 / 6.876477 (-4.032578) | 3.074217 / 2.142072 (0.932144) | 0.695065 / 4.805227 (-4.110162) | 0.156917 / 6.500664 (-6.343747) | 0.070185 / 0.075469 (-0.005284) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.586716 / 1.841788 (-0.255072) | 23.405837 / 8.074308 (15.331529) | 17.200851 / 10.191392 (7.009459) | 0.170073 / 0.680424 (-0.510351) | 0.023345 / 0.534201 (-0.510856) | 0.459192 / 0.579283 (-0.120091) | 0.477419 / 0.434364 (0.043055) | 0.558581 / 0.540337 (0.018244) | 0.814373 / 1.386936 (-0.572563) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#28bbe5667e6eaa1bb21685791fcf1a4ed1ef1777 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006050 / 0.011353 (-0.005303) | 0.003661 / 0.011008 (-0.007348) | 0.081753 / 0.038508 (0.043245) | 0.061275 / 0.023109 (0.038166) | 0.316278 / 0.275898 (0.040380) | 0.350783 / 0.323480 (0.027303) | 0.004694 / 0.007986 (-0.003291) | 0.003003 / 0.004328 (-0.001326) | 0.062877 / 0.004250 (0.058627) | 0.046985 / 0.037052 (0.009933) | 0.315698 / 0.258489 (0.057208) | 0.364607 / 0.293841 (0.070766) | 0.027365 / 0.128546 (-0.101181) | 0.008016 / 0.075646 (-0.067631) | 0.261379 / 0.419271 (-0.157893) | 0.045173 / 0.043533 (0.001640) | 0.313499 / 0.255139 (0.058360) | 0.339383 / 0.283200 (0.056184) | 0.020855 / 0.141683 (-0.120828) | 1.429851 / 1.452155 (-0.022303) | 1.506112 / 1.492716 (0.013396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194872 / 0.018006 (0.176866) | 0.451951 / 0.000490 (0.451462) | 0.002790 / 0.000200 (0.002590) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024331 / 0.037411 (-0.013081) | 0.073156 / 0.014526 (0.058630) | 0.084054 / 0.176557 (-0.092502) | 0.145656 / 0.737135 (-0.591480) | 0.084998 / 0.296338 (-0.211340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391324 / 0.215209 (0.176115) | 3.898406 / 2.077655 (1.820751) | 1.891175 / 1.504120 (0.387055) | 1.698738 / 1.541195 (0.157543) | 1.774324 / 1.468490 (0.305834) | 0.495129 / 4.584777 (-4.089648) | 3.027027 / 3.745712 (-0.718685) | 2.821423 / 5.269862 (-2.448439) | 1.870761 / 4.565676 (-2.694915) | 0.057029 / 0.424275 (-0.367246) | 0.006715 / 0.007607 (-0.000892) | 0.465801 / 0.226044 (0.239757) | 4.650891 / 2.268929 (2.381962) | 2.425097 / 55.444624 (-53.019527) | 2.134731 / 6.876477 (-4.741745) | 2.312854 / 2.142072 (0.170781) | 0.589668 / 4.805227 (-4.215559) | 0.124673 / 6.500664 (-6.375991) | 0.060887 / 0.075469 (-0.014582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243622 / 1.841788 (-0.598166) | 18.501640 / 8.074308 (10.427332) | 13.853099 / 10.191392 (3.661707) | 0.130255 / 0.680424 (-0.550168) | 0.016824 / 0.534201 (-0.517377) | 0.332297 / 0.579283 (-0.246986) | 0.360346 / 0.434364 (-0.074018) | 0.388598 / 0.540337 (-0.151739) | 0.527551 / 
1.386936 (-0.859385) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006181 / 0.011353 (-0.005172) | 0.003688 / 0.011008 (-0.007320) | 0.063395 / 0.038508 (0.024887) | 0.062531 / 0.023109 (0.039422) | 0.446565 / 0.275898 (0.170667) | 0.485224 / 0.323480 (0.161744) | 0.004982 / 0.007986 (-0.003004) | 0.002961 / 0.004328 (-0.001367) | 0.063124 / 0.004250 (0.058874) | 0.050234 / 0.037052 (0.013182) | 0.449731 / 0.258489 (0.191242) | 0.487293 / 0.293841 (0.193452) | 0.028528 / 0.128546 (-0.100018) | 0.008210 / 0.075646 (-0.067436) | 0.069520 / 0.419271 (-0.349751) | 0.041026 / 0.043533 (-0.002507) | 0.451370 / 0.255139 (0.196231) | 0.469151 / 0.283200 (0.185951) | 0.021076 / 0.141683 (-0.120607) | 1.439185 / 1.452155 (-0.012970) | 1.492634 / 1.492716 (-0.000082) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235932 / 0.018006 (0.217926) | 0.430070 / 0.000490 (0.429581) | 0.007347 / 0.000200 (0.007147) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026102 / 0.037411 (-0.011309) | 0.081333 / 0.014526 (0.066807) | 0.090111 / 0.176557 (-0.086446) | 0.144578 / 0.737135 (-0.592557) | 0.091961 / 0.296338 (-0.204378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455761 / 0.215209 (0.240552) | 4.536345 / 2.077655 (2.458690) | 2.496833 / 1.504120 (0.992713) | 2.323325 / 1.541195 (0.782130) | 2.388364 / 1.468490 (0.919873) 
| 0.512010 / 4.584777 (-4.072767) | 3.106268 / 3.745712 (-0.639444) | 2.879224 / 5.269862 (-2.390637) | 1.893859 / 4.565676 (-2.671818) | 0.059131 / 0.424275 (-0.365144) | 0.006763 / 0.007607 (-0.000844) | 0.528205 / 0.226044 (0.302161) | 5.296649 / 2.268929 (3.027720) | 2.933787 / 55.444624 (-52.510838) | 2.598258 / 6.876477 (-4.278218) | 2.768195 / 2.142072 (0.626123) | 0.597430 / 4.805227 (-4.207797) | 0.125865 / 6.500664 (-6.374799) | 0.061684 / 0.075469 (-0.013785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.341194 / 1.841788 (-0.500594) | 18.948225 / 8.074308 (10.873917) | 14.912340 / 10.191392 (4.720948) | 0.146905 / 0.680424 (-0.533519) | 0.017952 / 0.534201 (-0.516249) | 0.332299 / 0.579283 (-0.246984) | 0.362733 / 0.434364 (-0.071631) | 0.388278 / 0.540337 (-0.152060) | 0.546436 / 1.386936 (-0.840500) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb4f8357de001df656f2ea7af27625e189c3995b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008314 / 0.011353 (-0.003038) | 0.004904 / 0.011008 (-0.006105) | 0.097486 / 0.038508 (0.058978) | 0.074627 / 0.023109 (0.051518) | 0.396395 / 0.275898 (0.120497) | 0.440519 / 0.323480 (0.117039) | 0.005964 / 0.007986 (-0.002022) | 0.004203 / 0.004328 (-0.000126) | 0.079998 / 0.004250 (0.075747) | 0.055158 / 0.037052 (0.018106) | 0.415439 / 0.258489 (0.156950) | 0.476101 / 0.293841 (0.182260) | 0.044761 / 0.128546 (-0.083785) | 0.013966 / 0.075646 (-0.061680) | 0.351279 / 0.419271 (-0.067993) | 0.067250 / 0.043533 (0.023717) | 0.414310 / 0.255139 (0.159171) | 0.458104 / 0.283200 (0.174904) | 0.033678 / 0.141683 (-0.108005) | 1.730539 / 1.452155 (0.278385) | 1.840013 / 1.492716 (0.347297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272708 / 0.018006 (0.254702) | 0.593563 / 0.000490 (0.593074) | 0.005153 / 0.000200 (0.004953) | 
0.000179 / 0.000054 (0.000125) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029595 / 0.037411 (-0.007816) | 0.087994 / 0.014526 (0.073469) | 0.106066 / 0.176557 (-0.070491) | 0.180491 / 0.737135 (-0.556644) | 0.103707 / 0.296338 (-0.192631) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.566711 / 0.215209 (0.351502) | 5.589034 / 2.077655 (3.511380) | 2.364034 / 1.504120 (0.859914) | 2.119050 / 1.541195 (0.577855) | 2.103823 / 1.468490 (0.635333) | 0.819906 / 4.584777 (-3.764871) | 5.178464 / 3.745712 (1.432752) | 4.433986 / 5.269862 (-0.835875) | 2.825470 / 4.565676 (-1.740207) | 0.096907 / 0.424275 (-0.327368) | 0.008573 / 0.007607 (0.000966) | 0.677607 / 0.226044 (0.451563) | 6.811090 / 2.268929 (4.542162) | 3.140923 / 55.444624 (-52.303701) | 2.492251 / 6.876477 (-4.384225) | 2.660231 / 2.142072 (0.518158) | 0.980573 / 4.805227 (-3.824655) | 0.209028 / 6.500664 (-6.291636) | 0.079413 / 0.075469 (0.003944) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.578861 / 1.841788 (-0.262926) | 22.518269 / 8.074308 (14.443961) | 21.335916 / 10.191392 (11.144524) | 0.211311 / 0.680424 (-0.469113) | 0.033216 / 0.534201 (-0.500985) | 0.473266 / 0.579283 (-0.106017) | 0.581650 / 0.434364 (0.147286) | 0.522442 / 0.540337 (-0.017895) | 0.729039 / 1.386936 (-0.657897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008349 / 0.011353 (-0.003003) | 0.005856 / 0.011008 (-0.005152) | 0.077855 / 0.038508 (0.039347) | 0.080608 / 0.023109 (0.057499) | 0.512533 / 0.275898 (0.236635) | 0.551862 / 0.323480 (0.228382) | 0.007004 / 0.007986 (-0.000982) | 0.004147 / 0.004328 (-0.000181) | 0.086625 / 0.004250 (0.082374) | 0.065962 / 0.037052 (0.028910) | 0.545590 / 0.258489 (0.287101) | 0.586313 / 0.293841 (0.292472) | 0.048719 / 0.128546 (-0.079827) | 0.014997 / 0.075646 (-0.060649) | 0.089510 / 0.419271 (-0.329761) | 0.060936 / 0.043533 (0.017404) | 0.498455 / 0.255139 (0.243316) | 0.535460 / 0.283200 (0.252260) | 0.034624 / 0.141683 (-0.107059) | 1.717401 / 1.452155 (0.265246) | 1.808772 / 1.492716 (0.316056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.379504 / 0.018006 (0.361497) | 0.601756 / 0.000490 (0.601266) | 0.061740 / 0.000200 (0.061540) | 0.000497 / 0.000054 (0.000442) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031215 / 0.037411 (-0.006196) | 0.097501 / 0.014526 (0.082975) | 0.117434 / 0.176557 (-0.059122) | 0.166014 / 0.737135 (-0.571121) | 0.116466 / 0.296338 (-0.179873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699444 / 0.215209 (0.484235) | 6.329332 / 2.077655 (4.251678) | 3.072812 / 1.504120 (1.568693) | 2.729878 / 1.541195 (1.188683) | 2.933785 / 1.468490 (1.465295) | 0.935858 / 4.584777 (-3.648919) | 5.532532 / 3.745712 (1.786820) | 4.677139 / 5.269862 (-0.592722) | 2.963527 / 4.565676 (-1.602149) | 0.099661 / 0.424275 (-0.324614) | 0.009095 / 0.007607 (0.001488) | 0.751158 / 0.226044 (0.525114) | 7.652588 / 2.268929 (5.383660) | 3.802005 / 55.444624 (-51.642619) | 3.163126 / 6.876477 (-3.713351) | 3.401125 / 2.142072 (1.259052) | 0.998627 / 4.805227 (-3.806600) | 0.203310 / 6.500664 (-6.297354) | 0.073827 / 0.075469 (-0.001642) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.662989 / 1.841788 (-0.178799) | 23.777818 / 8.074308 (15.703510) | 20.855378 / 10.191392 (10.663986) | 0.279892 / 0.680424 (-0.400532) | 0.029303 / 0.534201 (-0.504898) | 0.473681 / 0.579283 (-0.105602) | 0.579148 / 0.434364 (0.144784) | 0.546931 / 0.540337 (0.006593) | 0.769740 / 1.386936 (-0.617196) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#63114e9cb78fe02dc145f923dec13d545a8d0327 \"CML watermark\")\n" ]
2023-09-04T13:13:53Z
2023-09-04T14:58:34Z
2023-09-04T14:47:17Z
MEMBER
null
null
null
If a split is empty, then the JSON split info should mention num_bytes = 0 and num_examples = 0. Until now they were omitted because the JSON dump ignores fields that are equal to their default values. This is needed in datasets-server since we parse this information for the viewer.
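To illustrate the behavior this PR fixes, here is a minimal sketch; the `SplitInfo` dataclass below is a simplified stand-in for datasets' real split info, not the actual implementation:

```python
from dataclasses import asdict, dataclass

# Simplified stand-in for datasets' SplitInfo (illustrative only).
@dataclass
class SplitInfo:
    name: str = ""
    num_bytes: int = 0
    num_examples: int = 0

def dump_skipping_defaults(info: SplitInfo) -> dict:
    # A dump that drops default-valued fields loses the zeros of an empty split.
    defaults = asdict(SplitInfo())
    return {k: v for k, v in asdict(info).items() if v != defaults[k]}

print(dump_skipping_defaults(SplitInfo(name="train")))
# {'name': 'train'} -- num_bytes=0 and num_examples=0 are silently dropped
```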
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6211/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6211/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6211.diff", "html_url": "https://github.com/huggingface/datasets/pull/6211", "merged_at": "2023-09-04T14:47:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6211.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6211" }
https://api.github.com/repos/huggingface/datasets/issues/6241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6241/comments
https://api.github.com/repos/huggingface/datasets/issues/6241/events
https://github.com/huggingface/datasets/pull/6241
1,896,429,694
PR_kwDODunzps5aVfl-
6,241
Remove unused global variables in `audio.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004027 / 0.011008 (-0.006982) | 0.084200 / 0.038508 (0.045692) | 0.072233 / 0.023109 (0.049124) | 0.361535 / 0.275898 (0.085637) | 0.386196 / 0.323480 (0.062716) | 0.004047 / 0.007986 (-0.003939) | 0.003416 / 0.004328 (-0.000912) | 0.064724 / 0.004250 (0.060474) | 0.055740 / 0.037052 (0.018688) | 0.360422 / 0.258489 (0.101933) | 0.399230 / 0.293841 (0.105389) | 0.031537 / 0.128546 (-0.097009) | 0.008630 / 0.075646 (-0.067016) | 0.289652 / 0.419271 (-0.129620) | 0.052881 / 0.043533 (0.009348) | 0.359538 / 0.255139 (0.104399) | 0.379410 / 0.283200 (0.096211) | 0.024539 / 0.141683 (-0.117144) | 1.470891 / 1.452155 (0.018736) | 1.578879 / 1.492716 (0.086163) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239200 / 0.018006 (0.221194) | 0.462100 / 0.000490 (0.461610) | 0.009055 / 0.000200 (0.008856) | 0.000406 / 0.000054 (0.000352) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028736 / 0.037411 (-0.008675) | 0.088051 / 0.014526 (0.073525) | 0.098101 / 0.176557 (-0.078456) | 0.152399 / 0.737135 (-0.584737) | 0.098776 / 0.296338 (-0.197563) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401761 / 0.215209 (0.186552) | 4.014143 / 2.077655 (1.936488) | 
2.033255 / 1.504120 (0.529135) | 1.855347 / 1.541195 (0.314152) | 1.996144 / 1.468490 (0.527654) | 0.488545 / 4.584777 (-4.096232) | 3.712030 / 3.745712 (-0.033682) | 3.439725 / 5.269862 (-1.830137) | 2.119289 / 4.565676 (-2.446388) | 0.057523 / 0.424275 (-0.366752) | 0.007780 / 0.007607 (0.000173) | 0.479522 / 0.226044 (0.253477) | 4.798218 / 2.268929 (2.529290) | 2.543816 / 55.444624 (-52.900809) | 2.180392 / 6.876477 (-4.696085) | 2.427195 / 2.142072 (0.285122) | 0.602071 / 4.805227 (-4.203156) | 0.133450 / 6.500664 (-6.367214) | 0.061975 / 0.075469 (-0.013494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250040 / 1.841788 (-0.591748) | 19.532327 / 8.074308 (11.458019) | 14.200298 / 10.191392 (4.008906) | 0.165165 / 0.680424 (-0.515259) | 0.018326 / 0.534201 (-0.515875) | 0.389788 / 0.579283 (-0.189495) | 0.419301 / 0.434364 (-0.015063) | 0.452645 / 0.540337 (-0.087693) | 0.643409 / 1.386936 (-0.743527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007040 / 0.011353 (-0.004313) | 0.004157 / 0.011008 (-0.006851) | 0.065439 / 0.038508 (0.026931) | 0.083210 / 0.023109 (0.060101) | 0.406707 / 0.275898 (0.130809) | 0.442759 / 0.323480 (0.119279) | 0.006321 / 0.007986 (-0.001665) | 0.003684 / 0.004328 (-0.000645) | 0.064517 / 0.004250 (0.060266) | 0.060676 / 0.037052 (0.023624) | 0.413395 / 0.258489 (0.154906) | 0.446776 / 0.293841 (0.152935) | 0.032542 / 0.128546 (-0.096004) | 0.008614 / 0.075646 (-0.067033) | 0.071760 / 0.419271 (-0.347511) | 0.049646 / 0.043533 (0.006113) | 0.402409 / 0.255139 (0.147270) | 0.422775 / 0.283200 (0.139575) | 0.024846 / 0.141683 (-0.116836) | 1.522915 / 1.452155 (0.070761) | 1.566518 / 1.492716 (0.073802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234478 / 0.018006 (0.216472) | 0.461318 / 0.000490 (0.460828) | 0.006304 / 0.000200 (0.006105) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036904 / 0.037411 (-0.000508) | 0.102144 / 0.014526 (0.087619) | 0.108985 / 0.176557 (-0.067572) | 0.162609 / 0.737135 (-0.574526) | 0.110295 / 0.296338 (-0.186044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438735 / 0.215209 (0.223526) | 4.377602 / 2.077655 (2.299948) | 2.375305 / 1.504120 (0.871185) | 2.215877 / 1.541195 (0.674682) | 2.317468 / 1.468490 (0.848978) | 0.495137 / 4.584777 (-4.089640) | 3.726323 / 3.745712 (-0.019389) | 3.493785 / 5.269862 (-1.776077) | 2.177891 / 4.565676 (-2.387785) | 0.058975 / 0.424275 (-0.365300) | 0.007897 / 0.007607 (0.000290) | 0.514063 / 0.226044 (0.288019) | 5.132714 / 2.268929 (2.863786) | 2.914125 / 55.444624 (-52.530499) | 2.532912 / 6.876477 (-4.343564) | 2.776438 / 2.142072 (0.634365) | 0.624831 / 4.805227 (-4.180396) | 0.135023 / 6.500664 (-6.365641) | 0.062040 / 0.075469 (-0.013429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359970 / 1.841788 (-0.481818) | 20.816464 / 8.074308 (12.742156) | 16.103544 / 10.191392 (5.912152) | 0.149120 / 0.680424 (-0.531304) | 0.020279 / 0.534201 (-0.513922) | 0.408727 / 0.579283 (-0.170556) | 0.436191 / 0.434364 (0.001827) | 0.485056 / 0.540337 (-0.055281) | 0.737727 / 1.386936 (-0.649209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d15280f435b7e27c9350a0cc37a07dbc5e2ea9ca \"CML watermark\")\n", "CI failures are unrelated", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008102 / 0.011353 (-0.003251) | 0.004886 / 0.011008 (-0.006123) | 0.090482 / 0.038508 (0.051974) | 0.071594 / 0.023109 (0.048485) | 0.428678 / 0.275898 (0.152780) | 0.442179 / 0.323480 (0.118699) | 0.004329 / 0.007986 (-0.003657) | 0.003756 / 0.004328 (-0.000573) | 0.087125 / 0.004250 (0.082874) | 0.055159 / 0.037052 (0.018107) | 0.437646 / 0.258489 (0.179157) | 0.446665 / 0.293841 (0.152824) | 0.046402 / 0.128546 (-0.082145) | 0.014248 / 0.075646 (-0.061398) | 0.331401 / 0.419271 (-0.087871) | 0.062010 / 0.043533 (0.018478) | 0.434774 / 0.255139 (0.179635) | 0.441063 / 0.283200 (0.157863) | 0.037424 / 0.141683 (-0.104258) | 1.720276 / 1.452155 (0.268121) | 1.731491 / 1.492716 (0.238775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302935 / 0.018006 (0.284929) | 0.590556 / 0.000490 (0.590067) | 0.014473 / 0.000200 (0.014274) | 0.000712 / 0.000054 (0.000658) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031289 / 0.037411 (-0.006122) | 0.091175 / 0.014526 (0.076649) | 0.112895 / 0.176557 (-0.063661) | 0.199558 / 0.737135 (-0.537577) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571586 / 0.215209 (0.356377) | 5.706894 / 2.077655 (3.629240) | 2.512701 / 1.504120 (1.008581) | 2.151705 / 1.541195 (0.610510) | 2.252738 / 1.468490 (0.784248) | 0.857524 / 4.584777 (-3.727253) | 5.189027 / 3.745712 (1.443315) | 4.464979 / 5.269862 (-0.804882) | 2.787486 / 4.565676 (-1.778190) | 0.090161 / 0.424275 (-0.334115) | 0.008649 / 0.007607 (0.001042) | 0.703367 / 0.226044 (0.477322) | 7.128971 / 2.268929 (4.860043) | 3.437475 / 55.444624 (-52.007149) | 2.562291 / 6.876477 (-4.314186) | 2.753419 / 2.142072 (0.611346) | 0.981964 / 4.805227 (-3.823263) | 0.194533 / 6.500664 (-6.306131) | 0.069659 / 0.075469 (-0.005810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510356 / 1.841788 (-0.331431) | 22.414117 / 8.074308 (14.339809) | 20.325418 / 10.191392 (10.134025) | 0.226823 / 0.680424 (-0.453601) | 0.029123 / 0.534201 (-0.505078) | 0.454656 / 0.579283 (-0.124627) | 0.559588 / 0.434364 (0.125224) | 
0.547386 / 0.540337 (0.007048) | 0.770169 / 1.386936 (-0.616767) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010167 / 0.011353 (-0.001186) | 0.005164 / 0.011008 (-0.005844) | 0.094897 / 0.038508 (0.056388) | 0.078027 / 0.023109 (0.054918) | 0.474442 / 0.275898 (0.198544) | 0.503362 / 0.323480 (0.179882) | 0.006988 / 0.007986 (-0.000998) | 0.005369 / 0.004328 (0.001041) | 0.079547 / 0.004250 (0.075297) | 0.059382 / 0.037052 (0.022329) | 0.468759 / 0.258489 (0.210270) | 0.566780 / 0.293841 (0.272939) | 0.050791 / 0.128546 (-0.077755) | 0.013191 / 0.075646 (-0.062455) | 0.086086 / 0.419271 (-0.333186) | 0.060399 / 0.043533 (0.016866) | 0.492985 / 0.255139 (0.237846) | 0.509139 / 0.283200 (0.225940) | 0.034537 / 0.141683 (-0.107146) | 1.699166 / 1.452155 (0.247011) | 1.789781 / 1.492716 (0.297065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278776 / 0.018006 (0.260769) | 0.615877 / 0.000490 (0.615387) | 0.009062 / 0.000200 (0.008862) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032931 / 0.037411 (-0.004481) | 0.094796 / 0.014526 (0.080270) | 0.126697 / 0.176557 (-0.049859) | 0.168172 / 0.737135 (-0.568963) | 0.113906 / 0.296338 (-0.182433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602378 / 0.215209 (0.387169) | 5.987708 / 2.077655 (3.910054) | 2.800339 / 1.504120 (1.296219) | 2.474127 / 1.541195 
(0.932932) | 2.502387 / 1.468490 (1.033897) | 0.808147 / 4.584777 (-3.776630) | 5.212691 / 3.745712 (1.466979) | 4.479452 / 5.269862 (-0.790409) | 2.831960 / 4.565676 (-1.733717) | 0.086777 / 0.424275 (-0.337498) | 0.009492 / 0.007607 (0.001885) | 0.716848 / 0.226044 (0.490803) | 7.099904 / 2.268929 (4.830975) | 3.794708 / 55.444624 (-51.649916) | 2.859826 / 6.876477 (-4.016650) | 3.109673 / 2.142072 (0.967600) | 0.936776 / 4.805227 (-3.868451) | 0.195152 / 6.500664 (-6.305512) | 0.074184 / 0.075469 (-0.001285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585419 / 1.841788 (-0.256369) | 22.420377 / 8.074308 (14.346068) | 20.761533 / 10.191392 (10.570141) | 0.228480 / 0.680424 (-0.451943) | 0.030944 / 0.534201 (-0.503257) | 0.444717 / 0.579283 (-0.134566) | 0.579632 / 0.434364 (0.145268) | 0.521669 / 0.540337 (-0.018669) | 0.748274 / 1.386936 (-0.638662) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#94e07965a400e6901f12e6f0f25c7090656c828c \"CML watermark\")\n" ]
2023-09-14T12:06:32Z
2023-09-15T15:57:10Z
2023-09-15T15:46:07Z
COLLABORATOR
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6241/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6241.diff", "html_url": "https://github.com/huggingface/datasets/pull/6241", "merged_at": "2023-09-15T15:46:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/6241.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6241" }
https://api.github.com/repos/huggingface/datasets/issues/6589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6589/comments
https://api.github.com/repos/huggingface/datasets/issues/6589/events
https://github.com/huggingface/datasets/issues/6589
2,081,358,619
I_kwDODunzps58DwMb
6,589
After version `2.16.0`, `PermissionError` is raised when users share a cache_dir
{ "avatar_url": "https://avatars.githubusercontent.com/u/106717516?v=4", "events_url": "https://api.github.com/users/minhopark-neubla/events{/privacy}", "followers_url": "https://api.github.com/users/minhopark-neubla/followers", "following_url": "https://api.github.com/users/minhopark-neubla/following{/other_user}", "gists_url": "https://api.github.com/users/minhopark-neubla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/minhopark-neubla", "id": 106717516, "login": "minhopark-neubla", "node_id": "U_kgDOBlxhTA", "organizations_url": "https://api.github.com/users/minhopark-neubla/orgs", "received_events_url": "https://api.github.com/users/minhopark-neubla/received_events", "repos_url": "https://api.github.com/users/minhopark-neubla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/minhopark-neubla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/minhopark-neubla/subscriptions", "type": "User", "url": "https://api.github.com/users/minhopark-neubla", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "We'll do a new release of `datasets` in the coming days with a fix !", "@lhoestq Thank you very much!" ]
2024-01-15T06:46:27Z
2024-02-02T07:55:38Z
2024-01-30T15:28:38Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug - We use a shared `cache_dir` via `HF_HOME="{shared_directory}"` - Since datasets version 2.16.0, datasets uses the `filelock` package for file locking #6445 - But the `filelock` package creates `.lock` files with `644` permissions - The dataset is not available to users other than the one who created the lock file via `load_dataset`. ### Steps to reproduce the bug 1. `pip install datasets==2.16.0` 2. `export HF_HOME="{shared_directory}"` 3. download a dataset with `load_dataset` 4. log out and log in as another user 5. `pip install datasets==2.16.0` 6. `export HF_HOME="{shared_directory}"` 7. download the dataset with `load_dataset` 8. `PermissionError` occurs ### Expected behavior - Users can share a `cache_dir` via the environment variable `HF_HOME` ### Environment info - python == 3.9.10 - datasets == 2.16.0 - ubuntu 22.04 - shared_directory has an ACL ![image (1)](https://github.com/huggingface/datasets/assets/106717516/5ca759db-ad0c-4883-9a97-9c8fccd00d8a) - users are in the same group (developers)
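Until the fixed release lands, a possible stopgap is to relax the permissions of the lock files you own in the shared cache. This is only a sketch, assuming all users belong to the same group and the directory's ACL permits it; it is not the official fix:

```python
import os
import stat
from pathlib import Path

# Make the lock files we own group-writable (0o664) so other group members
# can acquire them; lock files owned by other users are skipped.
hf_home = Path(os.environ["HF_HOME"])
group_rw = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH
for lock_file in hf_home.rglob("*.lock"):
    try:
        lock_file.chmod(group_rw)
    except PermissionError:
        pass
```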
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6589/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6589/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7097/comments
https://api.github.com/repos/huggingface/datasets/issues/7097/events
https://github.com/huggingface/datasets/issues/7097
2,458,455,489
I_kwDODunzps6SiQ3B
7,097
Some of DownloadConfig's properties are always being overridden in load.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/29772899?v=4", "events_url": "https://api.github.com/users/ductai199x/events{/privacy}", "followers_url": "https://api.github.com/users/ductai199x/followers", "following_url": "https://api.github.com/users/ductai199x/following{/other_user}", "gists_url": "https://api.github.com/users/ductai199x/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ductai199x", "id": 29772899, "login": "ductai199x", "node_id": "MDQ6VXNlcjI5NzcyODk5", "organizations_url": "https://api.github.com/users/ductai199x/orgs", "received_events_url": "https://api.github.com/users/ductai199x/received_events", "repos_url": "https://api.github.com/users/ductai199x/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ductai199x/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ductai199x/subscriptions", "type": "User", "url": "https://api.github.com/users/ductai199x", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-08-09T18:26:37Z
2024-08-09T18:26:37Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because previously extracted data is simply ignored and the archives are re-extracted the next time the dataset is loaded. See the image below: ![image](https://github.com/user-attachments/assets/9e76ebb7-09b1-4c95-adc8-a959b536f93c) ### Steps to reproduce the bug 1. Have a local dataset that contains archived files (zip, tar.gz, etc.) 2. Build a dataset loading script to download and extract these files 3. Run the load_dataset function with a DownloadConfig that specifically sets `force_extract` to False 4. The extraction process starts regardless of whether the archives were extracted previously ### Expected behavior The extraction process should not run when the archives were previously extracted and `force_extract` is set to False. ### Environment info datasets==2.20.0 python3.9
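For reference, a minimal sketch of the call described above; the script path is a placeholder, and the stated expectation is exactly what the override in `load.py` defeats:

```python
from datasets import DownloadConfig, load_dataset

# Expectation: with force_extract=False, archives extracted on a previous run
# are reused. In practice, dataset_module_factory overrides both properties
# to True, so extraction runs again on every load.
download_config = DownloadConfig(extract_compressed_file=True, force_extract=False)
ds = load_dataset("path/to/my_loading_script.py", download_config=download_config)
```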
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7097/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7097/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5371/comments
https://api.github.com/repos/huggingface/datasets/issues/5371/events
https://github.com/huggingface/datasets/issues/5371
1,501,369,036
I_kwDODunzps5ZfRLM
5,371
Add a robustness benchmark dataset for vision
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" } ]
null
[ "Ccing @nazneenrajani @lvwerra @osanseviero " ]
2022-12-17T12:35:13Z
2022-12-20T06:21:41Z
null
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Name ImageNet-C ### Paper Benchmarking Neural Network Robustness to Common Corruptions and Perturbations ### Data https://github.com/hendrycks/robustness ### Motivation It's a known fact that vision models are brittle when they meet with slightly corrupted and perturbed data. This is also correlated to the robustness aspects of vision models. Researchers use different benchmark datasets to evaluate the robustness aspects of vision models. ImageNet-C is one of them. Having this dataset in πŸ€— Datasets would allow researchers to evaluate and study the robustness aspects of vision models. Since the metric associated with these evaluations is top-1 accuracy, researchers should be able to easily take advantage of the evaluation benchmarks on the Hub and perform comprehensive reporting. ImageNet-C is a large dataset. Once it's in, it can act as a reference and we can also reach out to the authors of the other robustness benchmark datasets in vision, such as ObjectNet, WILDS, Metashift, etc. These datasets cater to different aspects. For example, ObjectNet is related to assessing how well a model performs under sub-population shifts. Related thread: https://huggingface.slack.com/archives/C036H4A5U8Z/p1669994598060499
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5371/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5371/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6930/comments
https://api.github.com/repos/huggingface/datasets/issues/6930/events
https://github.com/huggingface/datasets/issues/6930
2,323,225,922
I_kwDODunzps6KeZ1C
6,930
ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}
{ "avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4", "events_url": "https://api.github.com/users/Polarisamoon/events{/privacy}", "followers_url": "https://api.github.com/users/Polarisamoon/followers", "following_url": "https://api.github.com/users/Polarisamoon/following{/other_user}", "gists_url": "https://api.github.com/users/Polarisamoon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Polarisamoon", "id": 41767521, "login": "Polarisamoon", "node_id": "MDQ6VXNlcjQxNzY3NTIx", "organizations_url": "https://api.github.com/users/Polarisamoon/orgs", "received_events_url": "https://api.github.com/users/Polarisamoon/received_events", "repos_url": "https://api.github.com/users/Polarisamoon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Polarisamoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Polarisamoon/subscriptions", "type": "User", "url": "https://api.github.com/users/Polarisamoon", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "How do you solve it ?\r\n", "> How do you solve it ?\r\n\r\nPlease check your Python environment and dataset version. I have just resolved the issue, which was caused by a Python environment switching error\r\n" ]
2024-05-29T12:40:05Z
2024-07-23T06:25:24Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}. However, running dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') works fine. What is the issue here? ### Steps to reproduce the bug Run this code: import os os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com' from datasets import load_dataset en = load_dataset("allenai/c4", "en", streaming=True) ### Expected behavior The dataset loads successfully. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17 - Python version: 3.8.19 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2024.2.0
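A sketch of a fuller workaround in the same spirit as the one in the report: pin explicit data files for every split so a single format is inferred. The glob patterns follow the repo's file naming but should be treated as illustrative:

```python
from datasets import load_dataset

# Give both splits explicit .json.gz files so the builder infers one
# consistent format instead of {'train': ('json', {}), 'validation': (None, {})}.
en = load_dataset(
    "allenai/c4",
    data_files={
        "train": "en/c4-train.*-of-01024.json.gz",
        "validation": "en/c4-validation.*-of-00008.json.gz",
    },
    streaming=True,
)
```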
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6930/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6930/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6108/comments
https://api.github.com/repos/huggingface/datasets/issues/6108/events
https://github.com/huggingface/datasets/issues/6108
1,830,347,187
I_kwDODunzps5tGOGz
6,108
Loading local datasets got strangely stuck
{ "avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4", "events_url": "https://api.github.com/users/LoveCatc/events{/privacy}", "followers_url": "https://api.github.com/users/LoveCatc/followers", "following_url": "https://api.github.com/users/LoveCatc/following{/other_user}", "gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LoveCatc", "id": 48412571, "login": "LoveCatc", "node_id": "MDQ6VXNlcjQ4NDEyNTcx", "organizations_url": "https://api.github.com/users/LoveCatc/orgs", "received_events_url": "https://api.github.com/users/LoveCatc/received_events", "repos_url": "https://api.github.com/users/LoveCatc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions", "type": "User", "url": "https://api.github.com/users/LoveCatc", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Yesterday I waited for more than 12 hours to make sure it was really **stuck** instead of proceeding too slow.", "I've had similar weird issues with `load_dataset` as well. Not multiple files, but dataset is quite big, about 50G.", "We use a generic multiprocessing code, so there is little we can do about this - unfortunately, turning off multiprocessing seems to be the only solution. Multithreading would make our code easier to maintain and (most likely) avoid issues such as this one, but we cannot use it until the GIL is dropped (no-GIL Python should be released in 2024, so we can start exploring this then)", "The problem seems to be the `Generating train split`. Is it possible to avoid that? I have a dataset saved, just want to load it but somehow running into issues with that again.", "Hey guys, recently I ran into this problem again and I spent one whole day trying to locate the problem. Finally I found the problem seems to be with `pyarrow`'s json parser, and it seems a long-existing problem. Similar issue can be found in #2181. Anyway, my solution is to adjust the `load_dataset`'s parameter `chunksize`. You can inspect the parameter set in `datasets/packaged_modules/json/json.py`, now the actual chunksize should be very small, and you can increase the value. For me, `chunksize=10<<23` could solve the stuck problem. But I also find that too big `chunksize`, like `10 << 30`, would also cause a stuck, which is rather weird. I think I may explore this when I am free. And hope this can help those who also encounter the same problem. ", "Experiencing the same issue with the `kaist-ai/Feedback-Collection` dataset, which is comparatively small i.e. 100k rows.\r\nCode to reproduce\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"kaist-ai/Feedback-Collection\")\r\n```\r\n\r\nI have tried setting `num_proc=1` as well as `chunksize=1024, 64` but problem persists. Any pointers?", "sorry to disturb, at datasets==2.21.0, I add `chunksize` parameter but got error \"doesn't have a 'chunksize' key\". Is it got removed?" ]
2023-08-01T02:28:06Z
2024-12-31T16:01:00Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (yes, it is a dataset for an NLP model). The code snippet is: ```python ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train'] ``` However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. While trying to find the cause and a solution, I found some really strange behavior. If I load the dataset this way: ```python dlist = list() for _ in LIST_OF_FILE_PATHS: dlist.append(load_dataset("json", data_files=_)['train']) ds = concatenate_datasets(dlist) ``` I can actually load all the files successfully, despite the slow speed. But if I load them in a batch as above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to suspend it and then kill it. If I use more than 2 CPUs, Control-C simply causes the following error: ```bash ^C Process ForkPoolWorker-1: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker task = get() File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get res = self._reader.recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes buf = self._recv_bytes(maxlength) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt Generating train split: 92431 examples [01:23, 1104.25 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered yield queue.get(timeout=0.05) File "<string>", line 2, in get File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod kind, result = conn.recv() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv buf = self._recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module> a = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split for job_id, done, content in iflatmap_unordered( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get raise TimeoutError multiprocess.context.TimeoutError ``` I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (otherwise they could not be loaded individually by `load_dataset`), though some of the JSON may contain very long text (more than 1e7 characters). I do not know if this could be the problem. And there should not be any bottleneck in system resources. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1 TB of RAM. Thanks for your efforts and patience! Any suggestion or help would be appreciated. ### Steps to reproduce the bug 1. use load_dataset() with `data_files = LIST_OF_FILES` ### Expected behavior All the files should load smoothly. ### Environment info - Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each JSON structure contains only one key: `text`. Format checked. - `datasets` version: 2.14.2 - Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609 - Pandas version: 1.5.2
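Based on the `chunksize` suggestion in the comments above, a sketch of the adjusted call; whether the keyword is still accepted depends on the `datasets` version, as the last comment notes:

```python
from datasets import load_dataset

# Placeholder paths standing in for the ~2500 .jsonl files from the report.
LIST_OF_FILE_PATHS = ["part-000.jsonl", "part-001.jsonl"]

# Larger read chunks for pyarrow's JSON parser; 10 << 23 (~80 MiB) is the
# value reported to avoid the hang in the discussion above.
ds = load_dataset(
    "json",
    data_files=LIST_OF_FILE_PATHS,
    chunksize=10 << 23,
    num_proc=16,
)["train"]
```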
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6108/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6773/comments
https://api.github.com/repos/huggingface/datasets/issues/6773/events
https://github.com/huggingface/datasets/issues/6773
2,221,049,121
I_kwDODunzps6EYoUh
6,773
Dataset on Hub re-downloads every time?
{ "avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4", "events_url": "https://api.github.com/users/manestay/events{/privacy}", "followers_url": "https://api.github.com/users/manestay/followers", "following_url": "https://api.github.com/users/manestay/following{/other_user}", "gists_url": "https://api.github.com/users/manestay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manestay", "id": 9099139, "login": "manestay", "node_id": "MDQ6VXNlcjkwOTkxMzk=", "organizations_url": "https://api.github.com/users/manestay/orgs", "received_events_url": "https://api.github.com/users/manestay/received_events", "repos_url": "https://api.github.com/users/manestay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manestay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manestay/subscriptions", "type": "User", "url": "https://api.github.com/users/manestay", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The caching works as expected when I try to reproduce this locally or on Colab...", "hi @mariosasko , Thank you for checking. I also tried running this again just now, and it seems like the `load_dataset()` caches properly (though I'll double check later).\r\n\r\nI think the issue might be in the caching of the function output for `territories.map(lambda row: {'Claimants': row['Claimants'].split(';')})`. My current run re-ran this, even though I have run this many times before, and as demonstrated by loading from cache, the loaded dataset is the same.\r\n\r\nI wonder if the issue stems from using CSV output. Do you recommend changing to Parquet, and if so, is there an easy way to take the already uploaded data on the Hub and reformat?", "This issue seems similar to https://github.com/huggingface/datasets/issues/6184 (`dill` serializes objects defined outside the `__main__` module by reference). You should be able to work around this limitation by defining the lambdas outside of `load_borderlines_hf` (as module variables) and then setting their `__module__` attribute's value to `None` to force serializing them by value, e.g., like this: \r\n```python\r\nsplit_Claimants_row = lambda row: {'Claimants': row['Claimants'].split(';')}\r\nsplit_Claimants_row.__module__ = None\r\n```", "Thank you, I'll give this a try. Your fix makes sense to me, so this issue can be closed for now.\r\n\r\nUnrelated comment -- for \"Downloads last month\" on the hub page, I'm assuming for this project that each downloaded CSV is 1 download? The dataset consists of 51 CSVs, so I'm trying to see why it's incrementing so quickly (1125 2 days ago, 1246 right now).", "This doc explains how we count \"Downloads last month\": https://huggingface.co/docs/hub/datasets-download-stats" ]
2024-04-02T17:23:22Z
2024-04-08T18:43:45Z
2024-04-08T18:43:45Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the function below, `load_borderlines_hf`, it downloads the entire dataset from the hub and then does the other logic: https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80 Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload). __EDIT:__ as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing `load_dataset()` retrieves from the cache, the `map()` calls should also retrieve from the cached output. But the `map()` commands re-execute sometimes. ### Steps to reproduce the bug 1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100) 2. Run it in Python `load_borderlines_hf(None)` 3. It completes successfully, downloading from HF hub, then doing the mapping logic etc. 4. If you run it again after some time, it will re-download, ignoring the cache ### Expected behavior Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.10.0
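A sketch of the fix suggested in the comments above, applied to the mapping step described here; the module-level function name and the `train` split are assumptions about the repo's layout:

```python
from datasets import load_dataset

# Define the mapped function at module level and force dill to serialize it
# by value (per the suggestion in the comments) so the fingerprint used for
# `map` caching stays stable across runs.
split_claimants_row = lambda row: {"Claimants": row["Claimants"].split(";")}
split_claimants_row.__module__ = None

territories = load_dataset("manestay/borderlines", "territories")["train"]
territories = territories.map(split_claimants_row)
```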
{ "avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4", "events_url": "https://api.github.com/users/manestay/events{/privacy}", "followers_url": "https://api.github.com/users/manestay/followers", "following_url": "https://api.github.com/users/manestay/following{/other_user}", "gists_url": "https://api.github.com/users/manestay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manestay", "id": 9099139, "login": "manestay", "node_id": "MDQ6VXNlcjkwOTkxMzk=", "organizations_url": "https://api.github.com/users/manestay/orgs", "received_events_url": "https://api.github.com/users/manestay/received_events", "repos_url": "https://api.github.com/users/manestay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manestay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manestay/subscriptions", "type": "User", "url": "https://api.github.com/users/manestay", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6773/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6791/comments
https://api.github.com/repos/huggingface/datasets/issues/6791/events
https://github.com/huggingface/datasets/issues/6791
2,230,102,332
I_kwDODunzps6E7Kk8
6,791
`add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1)
{ "avatar_url": "https://avatars.githubusercontent.com/u/40491005?v=4", "events_url": "https://api.github.com/users/NeuralFlux/events{/privacy}", "followers_url": "https://api.github.com/users/NeuralFlux/followers", "following_url": "https://api.github.com/users/NeuralFlux/following{/other_user}", "gists_url": "https://api.github.com/users/NeuralFlux/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NeuralFlux", "id": 40491005, "login": "NeuralFlux", "node_id": "MDQ6VXNlcjQwNDkxMDA1", "organizations_url": "https://api.github.com/users/NeuralFlux/orgs", "received_events_url": "https://api.github.com/users/NeuralFlux/received_events", "repos_url": "https://api.github.com/users/NeuralFlux/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NeuralFlux/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NeuralFlux/subscriptions", "type": "User", "url": "https://api.github.com/users/NeuralFlux", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I realized I was passing a string column to this instead of float. Is it possible to add a warning or error to prevent users from falsely believing there's a bug?", "Hello!\r\n\r\nI agree that we could add some safeguards around the type of `ds[column]`. At least for FAISS, we need the column to be made of embeddings as FAISS doesn't perform the embeddings itself.\r\n\r\nI can propose a PR sometime this week.", "@Dref360 thanks for the initiative!" ]
2024-04-08T01:57:03Z
2024-04-11T15:38:05Z
2024-04-11T15:38:05Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the traceback: ```python 214 def replacement_add(self, x): 215 """Adds vectors to the index. 216 The index must be trained before vectors can be added to it. 217 The vectors are implicitly numbered in sequence. When `n` vectors are (...) 224 `dtype` must be float32. 225 """ --> 227 n, d = x.shape 228 assert d == self.d 229 x = np.ascontiguousarray(x, dtype='float32') ValueError: not enough values to unpack (expected 2, got 1) ``` ### Steps to reproduce the bug 1. Load any dataset like `ds = datasets.load_dataset("wikimedia/wikipedia", "20231101.en")["train"]` 2. Add a FAISS index on any column: `ds.add_faiss_index('title')` ### Expected behavior The index should be created ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.35 - Python version: 3.9.19 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0 - `faiss-cpu` version: 1.8.0
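As the discussion above points out, the indexed column must hold float embeddings rather than raw text. A sketch under that assumption (`embed` is a placeholder; real code would call a sentence-embedding model, and the random vectors are illustrative only):

```python
import numpy as np
import datasets

ds = datasets.load_dataset("wikimedia/wikipedia", "20231101.en")["train"]

def embed(batch):
    # Placeholder embeddings; substitute a real model's outputs here.
    batch["embeddings"] = [np.random.rand(384).astype("float32") for _ in batch["title"]]
    return batch

small = ds.select(range(1000)).map(embed, batched=True)
small.add_faiss_index(column="embeddings")  # indexes vectors, not strings
```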
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6791/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6791/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5178/comments
https://api.github.com/repos/huggingface/datasets/issues/5178/events
https://github.com/huggingface/datasets/issues/5178
1,430,800,810
I_kwDODunzps5VSEmq
5,178
Unable to download the Chinese `wikipedia`: dumpstatus.json not found!
{ "avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4", "events_url": "https://api.github.com/users/beyondguo/events{/privacy}", "followers_url": "https://api.github.com/users/beyondguo/followers", "following_url": "https://api.github.com/users/beyondguo/following{/other_user}", "gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/beyondguo", "id": 37113676, "login": "beyondguo", "node_id": "MDQ6VXNlcjM3MTEzNjc2", "organizations_url": "https://api.github.com/users/beyondguo/orgs", "received_events_url": "https://api.github.com/users/beyondguo/received_events", "repos_url": "https://api.github.com/users/beyondguo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions", "type": "User", "url": "https://api.github.com/users/beyondguo", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "In the dumps page of the wiki (https://dumps.wikimedia.org/zhwiki/), I found the following dumps:\r\n```\r\nIndex of /zhwiki/\r\n[../](https://dumps.wikimedia.org/)\r\n[20220701/](https://dumps.wikimedia.org/zhwiki/20220701/) 21-Aug-2022 01:48 -\r\n[20220720/](https://dumps.wikimedia.org/zhwiki/20220720/) 02-Sep-2022 01:48 -\r\n[20220801/](https://dumps.wikimedia.org/zhwiki/20220801/) 21-Sep-2022 01:44 -\r\n[20220820/](https://dumps.wikimedia.org/zhwiki/20220820/) 01-Oct-2022 09:39 -\r\n[20220901/](https://dumps.wikimedia.org/zhwiki/20220901/) 20-Oct-2022 09:44 -\r\n[20220920/](https://dumps.wikimedia.org/zhwiki/20220920/) 23-Sep-2022 12:06 -\r\n[20221001/](https://dumps.wikimedia.org/zhwiki/20221001/) 04-Oct-2022 15:10 -\r\n[20221020/](https://dumps.wikimedia.org/zhwiki/20221020/) 01-Nov-2022 03:15 -\r\n[latest/](https://dumps.wikimedia.org/zhwiki/latest/) 01-Nov-2022 03:15 -\r\n```\r\n\r\nMaybe the older dumps are not available which caused the downloading failure? \r\n\r\nHowever, when I changed to the newer version:\r\n```\r\ndata = load_dataset('wikipedia', '20220701.zh', beam_runner='DirectRunner')\r\n```\r\n\r\nit shows:\r\n```\r\nValueError: BuilderConfig 20220701.zh not found. Available: ['20220301.aa', '20220301.ab', '20220301.ace', '20220301.ady', '20220301.af', '20220301.ak', '20220301.als', '20220301.am', '20220301.an', '20220301.ang', '20220301.ar', '20220301.arc', '20220301.arz', '20220301.as', '20220301.ast', '20220301.atj', '20220301.av', '20220301.ay', '20220301.az', '20220301.azb', '20220301.ba', '20220301.bar', '20220301.bat-smg', '20220301.bcl', '20220301.be', '20220301.be-x-old', '20220301.bg', '20220301.bh', '20220301.bi', '20220301.bjn', '20220301.bm', '20220301.bn', '20220301.bo', '20220301.bpy', '20220301.br', '20220301.bs', '20220301.bug', '20220301.bxr', '20220301.ca', '20220301.cbk-zam', '20220301.cdo', '20220301.ce', '20220301.ceb', '20220301.ch', '20220301.cho', '20220301.chr', '20220301.chy', '20220301.ckb', '20220301.co', '20220301.cr', '20220301.crh', '20220301.cs', '20220301.csb', '20220301.cu', '20220301.cv', '20220301.cy', '20220301.da', '20220301.de', '20220301.din', '20220301.diq', '20220301.dsb', '20220301.dty', '20220301.dv', '20220301.dz', '20220301.ee', '20220301.el', '20220301.eml', '20220301.en', '20220301.eo', '20220301.es', '20220301.et', '20220301.eu', '20220301.ext', '20220301.fa', '20220301.ff', '20220301.fi', '20220301.fiu-vro', '20220301.fj', '20220301.fo', '20220301.fr', '20220301.frp', '20220301.frr', '20220301.fur', '20220301.fy', '20220301.ga', '20220301.gag', '20220301.gan', '20220301.gd', '20220301.gl', '20220301.glk', '20220301.gn', '20220301.gom', '20220301.gor', '20220301.got', '20220301.gu', '20220301.gv', '20220301.ha', '20220301.hak', '20220301.haw', '20220301.he', '20220301.hi', '20220301.hif', '20220301.ho', '20220301.hr', '20220301.hsb', '20220301.ht', '20220301.hu', '20220301.hy', '20220301.ia', '20220301.id', '20220301.ie', '20220301.ig', '20220301.ii', '20220301.ik', '20220301.ilo', '20220301.inh', '20220301.io', '20220301.is', '20220301.it', '20220301.iu', '20220301.ja', '20220301.jam', '20220301.jbo', '20220301.jv', '20220301.ka', '20220301.kaa', '20220301.kab', '20220301.kbd', '20220301.kbp', '20220301.kg', '20220301.ki', '20220301.kj', '20220301.kk', '20220301.kl', '20220301.km', '20220301.kn', '20220301.ko', '20220301.koi', '20220301.krc', '20220301.ks', '20220301.ksh', '20220301.ku', '20220301.kv', '20220301.kw', '20220301.ky', '20220301.la', '20220301.lad', '20220301.lb', '20220301.lbe', '20220301.lez', 
'20220301.lfn', '20220301.lg', '20220301.li', '20220301.lij', '20220301.lmo', '20220301.ln', '20220301.lo', '20220301.lrc', '20220301.lt', '20220301.ltg', '20220301.lv', '20220301.mai', '20220301.map-bms', '20220301.mdf', '20220301.mg', '20220301.mh', '20220301.mhr', '20220301.mi', '20220301.min', '20220301.mk', '20220301.ml', '20220301.mn', '20220301.mr', '20220301.mrj', '20220301.ms', '20220301.mt', '20220301.mus', '20220301.mwl', '20220301.my', '20220301.myv', '20220301.mzn', '20220301.na', '20220301.nah', '20220301.nap', '20220301.nds', '20220301.nds-nl', '20220301.ne', '20220301.new', '20220301.ng', '20220301.nl', '20220301.nn', '20220301.no', '20220301.nov', '20220301.nrm', '20220301.nso', '20220301.nv', '20220301.ny', '20220301.oc', '20220301.olo', '20220301.om', '20220301.or', '20220301.os', '20220301.pa', '20220301.pag', '20220301.pam', '20220301.pap', '20220301.pcd', '20220301.pdc', '20220301.pfl', '20220301.pi', '20220301.pih', '20220301.pl', '20220301.pms', '20220301.pnb', '20220301.pnt', '20220301.ps', '20220301.pt', '20220301.qu', '20220301.rm', '20220301.rmy', '20220301.rn', '20220301.ro', '20220301.roa-rup', '20220301.roa-tara', '20220301.ru', '20220301.rue', '20220301.rw', '20220301.sa', '20220301.sah', '20220301.sat', '20220301.sc', '20220301.scn', '20220301.sco', '20220301.sd', '20220301.se', '20220301.sg', '20220301.sh', '20220301.si', '20220301.simple', '20220301.sk', '20220301.sl', '20220301.sm', '20220301.sn', '20220301.so', '20220301.sq', '20220301.sr', '20220301.srn', '20220301.ss', '20220301.st', '20220301.stq', '20220301.su', '20220301.sv', '20220301.sw', '20220301.szl', '20220301.ta', '20220301.tcy', '20220301.te', '20220301.tet', '20220301.tg', '20220301.th', '20220301.ti', '20220301.tk', '20220301.tl', '20220301.tn', '20220301.to', '20220301.tpi', '20220301.tr', '20220301.ts', '20220301.tt', '20220301.tum', '20220301.tw', '20220301.ty', '20220301.tyv', '20220301.udm', '20220301.ug', '20220301.uk', '20220301.ur', '20220301.uz', '20220301.ve', '20220301.vec', '20220301.vep', '20220301.vi', '20220301.vls', '20220301.vo', '20220301.wa', '20220301.war', '20220301.wo', '20220301.wuu', '20220301.xal', '20220301.xh', '20220301.xmf', '20220301.yi', '20220301.yo', '20220301.za', '20220301.zea', '20220301.zh', '20220301.zh-classical', '20220301.zh-min-nan', '20220301.zh-yue', '20220301.zu']\r\n```\r\n\r\nSo I guess adding the latest dumps versions to the `BuilderConfig` may solve the problem? But how to add it?", "Hi, @beyondguo, thanks for reporting.\r\n\r\nYou have all the information in the dataset card: https://huggingface.co/datasets/wikipedia\r\n\r\n> Then, you can load any subset of Wikipedia per language and per date this way:\r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> load_dataset(\"wikipedia\", language=\"sw\", date=\"20220120\", beam_runner=...) \r\n> ```\r\n> where you can pass as beam_runner any Apache Beam supported runner for (distributed) data processing (see [here](https://beam.apache.org/documentation/runners/capability-matrix/)). 
Pass \"DirectRunner\" to run it on your machine.\r\n> \r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nNote that you have to pass the language and date as keyword arguments, and the available dates depend on the language and can be found on Wikimedia website.", "Also:\r\n> Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n> ```python\r\n> load_dataset(\"wikipedia\", \"20220301.en\")\r\n> ```\r\n> The list of pre-processed subsets is:\r\n> - \"20220301.de\"\r\n> - \"20220301.en\"\r\n> - \"20220301.fr\"\r\n> - \"20220301.frr\"\r\n> - \"20220301.it\"\r\n> - \"20220301.simple\"" ]
2022-11-01T03:17:55Z
2022-11-02T08:27:15Z
2022-11-02T08:24:29Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I tried: `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` and `data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')` but both got: `FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json` the full report is: ``` FileNotFoundError Traceback (most recent call last) <ipython-input-13-d07c5021090c> in <module> 1 from datasets import load_dataset 2 ----> 3 data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')<?, ?it/s] /opt/conda/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1740 1741 # Download and prepare data -> 1742 builder_instance.download_and_prepare( 1743 download_config=download_config, 1744 download_mode=download_mode, /opt/conda/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) 812 **download_and_prepare_kwargs, 813 } --> 814 self._download_and_prepare( 815 dl_manager=dl_manager, 816 verify_infos=verify_infos, /opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1645 options=beam_options, 1646 ) -> 1647 super()._download_and_prepare( 1648 dl_manager, verify_infos=False, pipeline=pipeline, **prepare_splits_kwargs 1649 ) # TODO handle verify_infos in beam datasets /opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 881 split_dict = SplitDict(dataset_name=self.name) 882 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 883 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 884 885 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline) 943 info_url = _base_url(lang) + _INFO_FILE 944 # Use dictionary since testing mock always returns the same result. --> 945 downloaded_files = dl_manager.download_and_extract({"info": info_url}) 946 947 xml_urls = [] /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls) 431 extracted_path(s): `str`, extracted paths of given URL(s). 
432 """ --> 433 return self.extract(self.download(url_or_urls)) 434 435 def get_recorded_sizes_checksums(self): /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download(self, url_or_urls) 308 309 start_time = datetime.now() --> 310 downloaded_path_or_paths = map_nested( 311 download_func, 312 url_or_urls, /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc) 427 num_proc = 1 428 if num_proc <= 1 or len(iterable) < parallel_min_length: --> 429 mapped = [ 430 _single_map_nested((function, obj, types, None, True, None)) 431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 428 if num_proc <= 1 or len(iterable) < parallel_min_length: 429 mapped = [ --> 430 _single_map_nested((function, obj, types, None, True, None)) 431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 432 ] /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 329 # Singleton first to spare some computation 330 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 331 return function(data_struct) 332 333 # Reduce logging to keep things readable in multiprocessing with tqdm /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config) 335 # append the relative path to the base_path 336 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 337 return cached_path(url_or_filename, download_config=download_config) 338 339 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]): /opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 186 if is_remote_url(url_or_filename): 187 # URL, so get it from the cache (downloading if necessary) --> 188 output_path = get_from_cache( 189 url_or_filename, 190 cache_dir=cache_dir, /opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 533 ) 534 elif response is not None and response.status_code == 404: --> 535 raise FileNotFoundError(f"Couldn't find file at {url}") 536 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 537 if head_error is not None: FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json ``` ### Steps to reproduce the bug `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` ### Expected behavior download the data ### Environment info python3.6 latest datasets/transformers version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5178/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5178/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5187/comments
https://api.github.com/repos/huggingface/datasets/issues/5187/events
https://github.com/huggingface/datasets/pull/5187
1,432,375,375
PR_kwDODunzps5CBE08
5,187
chore: add notebook links to img cls and obj det.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" } ]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@nateraw I guess the failing test is unrelated. ", "@sayakpaul Yea failures are unrelated. ", "Alright. Will wait for @osanseviero's take and then merge. ", "FYI @stevhliu ", "@osanseviero @stevhliu @nateraw thank you for your comments. Acted on them.", "Thanks! Can I merge? Or should we wait for approvals from the others?", "Since @stevhliu approved as well, I think you're good to go", "Alright!\r\n\r\nMerging as a Member for the first time πŸ«€" ]
2022-11-02T02:30:09Z
2022-11-03T01:52:24Z
2022-11-03T01:49:56Z
MEMBER
null
null
null
Closes https://github.com/huggingface/datasets/issues/5182
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5187/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5187/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5187.diff", "html_url": "https://github.com/huggingface/datasets/pull/5187", "merged_at": "2022-11-03T01:49:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/5187.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5187" }
https://api.github.com/repos/huggingface/datasets/issues/4622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4622/comments
https://api.github.com/repos/huggingface/datasets/issues/4622/events
https://github.com/huggingface/datasets/pull/4622
1,293,031,939
PR_kwDODunzps46ynmT
4,622
Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present)
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq @mariosasko pls take a look at https://github.com/huggingface/datasets/pull/4622/commits/769e4c046a5bd5e3a4dbd09cfad1f4cf60677869. I modified `_generate_examples()` according to the same logic too: removed checking if `metadata_files` are not empty for the case when `self.config.drop_metadata=True` because I think we should be aligned with the config and preserve labels if `self.config.drop_labels=False` (the default value) and `self.config.drop_metadata=True` but `metadata_files` are passed. This is an extremely unlikely use case (when `self.config.drop_metadata=True`, but `metadata_files` are passed to `_generate_examples()`) since users usually do not use `_generate_examples()` alone but I believe it would be consistent to have the same behavior as in `_splits_generators()`. This change requires change in tests too if we suppose that we want to preserve labels (default value of `self.config.drop_labels` is False) when `self.config.drop_metadata=True`, even if `metadata_files` are for some reason provided (as it is done in tests). \r\n\r\nwdyt about this change?\r\n", "@lhoestq it wouldn't raise an error if we check `example.keys() == {\"image\", \"label\"}` as test checks only `_generate_examples`, not `encode_example`. and in the implementation of this PR `_generate_examples` would return both `image` and `label` key in the case when `drop_metadata=True` and `drop_labels=False` (default) as it seems that we agreed on that :)", "and on the other hand it would raise an error if `label` column is missing in _generate_examples when `drop_metadata=True` and `drop_labels=False`\r\n\r\nby \"it\" i mean tests :D (`test_generate_examples_with_metadata_that_misses_one_image`, `test_generate_examples_with_metadata_in_wrong_location` and `test_generate_examples_drop_metadata`)", "Perhaps we could make `self.config.drop_metadata = None` and `self.config.drop_labels = None` the defaults to see explicitly what the user wants. This would then turn into `self.config.drop_metadata = False` and `self.config.drop_labels = True` if metadata files are present and `self.config.drop_metadata = True` and `self.config.drop_labels = False` if not. And if the user wants to have the `label` column alongside metadata columns, it can do so by passing `drop_labels = False` explicitely (in that scenario we have to check that the `label` column is not already present in metadata files). And maybe we can also improve the logging messages.\r\n\r\nI find it problematic that the current implementation drops labels in some scenarios even if `self.config.drop_labels = False`, and the user doesn't have control over this behavior.\r\n\r\nLet me know what you think." ]
2022-07-04T11:23:20Z
2022-07-15T14:37:23Z
2022-07-15T14:24:24Z
CONTRIBUTOR
null
null
null
Will fix #4621. ImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file in a data directory). This happens because metadata files are collected inside the `analyze()` function regardless of the `drop_metadata` value, and then the following condition doesn't pass: https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/imagefolder/imagefolder.py#L167 So I suggest double-checking inside `analyze()` so that metadata files are not collected if they are not needed (and labels too, to be consistent). --- Also, I added a test to check that labels are inferred correctly from directory names in general (because we didn't have one) :)
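A sketch of the configuration this PR targets (the `data_dir` path and layout are hypothetical):

```python
from datasets import load_dataset

# Assumed layout: data_dir/<class_name>/*.jpg plus at least one metadata.jsonl.
# With the fix, metadata is dropped while directory-name labels are kept.
ds = load_dataset(
    "imagefolder",
    data_dir="path/to/images",
    drop_metadata=True,
    drop_labels=False,
)
print(ds["train"].features)  # expected: an 'image' column and a 'label' ClassLabel
```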
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4622/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4622/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4622.diff", "html_url": "https://github.com/huggingface/datasets/pull/4622", "merged_at": "2022-07-15T14:24:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4622.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4622" }
https://api.github.com/repos/huggingface/datasets/issues/6650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6650/comments
https://api.github.com/repos/huggingface/datasets/issues/6650/events
https://github.com/huggingface/datasets/issues/6650
2,125,680,991
I_kwDODunzps5-s1Ff
6,650
AttributeError: 'InMemoryTable' object has no attribute '_batches'
{ "avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4", "events_url": "https://api.github.com/users/matsuobasho/events{/privacy}", "followers_url": "https://api.github.com/users/matsuobasho/followers", "following_url": "https://api.github.com/users/matsuobasho/following{/other_user}", "gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/matsuobasho", "id": 13874772, "login": "matsuobasho", "node_id": "MDQ6VXNlcjEzODc0Nzcy", "organizations_url": "https://api.github.com/users/matsuobasho/orgs", "received_events_url": "https://api.github.com/users/matsuobasho/received_events", "repos_url": "https://api.github.com/users/matsuobasho/repos", "site_admin": false, "starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions", "type": "User", "url": "https://api.github.com/users/matsuobasho", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```", "No, it doesn't, it runs fine. But what's really strange is that the error just went away after I reran the data prep script for conversion from csv to a datasets object. I realize that's not very helpful since the problem isn't reproducible. ", "Feel free to close the issue then :)." ]
2024-02-08T17:11:26Z
2024-02-21T00:34:41Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug ``` Traceback (most recent call last): File "finetune.py", line 103, in <module> main(args) File "finetune.py", line 45, in main data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map { File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp> k: dataset.map( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single arrow_formatted_shard = shard.with_format("arrow") File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format dataset = copy.deepcopy(self) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy y = copier(x, memo) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy y = copier(memo) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__ memo[id(self._batches)] = list(self._batches) AttributeError: 'InMemoryTable' object has no attribute '_batches' ``` ### Steps to reproduce the bug I'm running an MLOps flow using AzureML. The error appears when I run the following function in my training script: ```python data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, seq_length), batched=True, batch_size=batch_size, remove_columns=['col1', 'col2']) ``` ```python def tokenize_function(tok, seq_length, example): # Pad so that each batch has the same sequence length inp = tok(example['col1'], padding=True, truncation=True) outp = tok(example['col2'], padding="max_length", max_length=seq_length) res = { 'input_ids': inp['input_ids'], 'attention_mask': inp['attention_mask'], 'decoder_input_ids': outp['input_ids'], 'labels': outp['input_ids'], 'decoder_attention_mask': outp['attention_mask'] } return res ``` ### Expected behavior Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then, but it doesn't appear that datasets versions have changed since Dec. '23. ### Environment info datasets 2.16.1 transformers 4.35.2 pyarrow 15.0.0 pyarrow-hotfix 0.6 torch 2.0.1 I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow the last time I tried.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6650/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6650/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5813/comments
https://api.github.com/repos/huggingface/datasets/issues/5813/events
https://github.com/huggingface/datasets/pull/5813
1,691,908,535
PR_kwDODunzps5Pj0_E
5,813
[DO-NOT-MERGE] Debug Windows issue at #3
{ "avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4", "events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}", "followers_url": "https://api.github.com/users/HyukjinKwon/followers", "following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}", "gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HyukjinKwon", "id": 6477701, "login": "HyukjinKwon", "node_id": "MDQ6VXNlcjY0Nzc3MDE=", "organizations_url": "https://api.github.com/users/HyukjinKwon/orgs", "received_events_url": "https://api.github.com/users/HyukjinKwon/received_events", "repos_url": "https://api.github.com/users/HyukjinKwon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions", "type": "User", "url": "https://api.github.com/users/HyukjinKwon", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2023-05-02T07:19:34Z
2023-05-02T07:21:30Z
2023-05-02T07:21:30Z
NONE
null
null
null
TBD
{ "avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4", "events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}", "followers_url": "https://api.github.com/users/HyukjinKwon/followers", "following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}", "gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HyukjinKwon", "id": 6477701, "login": "HyukjinKwon", "node_id": "MDQ6VXNlcjY0Nzc3MDE=", "organizations_url": "https://api.github.com/users/HyukjinKwon/orgs", "received_events_url": "https://api.github.com/users/HyukjinKwon/received_events", "repos_url": "https://api.github.com/users/HyukjinKwon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions", "type": "User", "url": "https://api.github.com/users/HyukjinKwon", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5813/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5813.diff", "html_url": "https://github.com/huggingface/datasets/pull/5813", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5813.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5813" }
https://api.github.com/repos/huggingface/datasets/issues/6444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6444/comments
https://api.github.com/repos/huggingface/datasets/issues/6444/events
https://github.com/huggingface/datasets/pull/6444
2,006,842,179
PR_kwDODunzps5gKG_e
6,444
Remove `Table.__getstate__` and `Table.__setstate__`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4", "events_url": "https://api.github.com/users/LZHgrla/events{/privacy}", "followers_url": "https://api.github.com/users/LZHgrla/followers", "following_url": "https://api.github.com/users/LZHgrla/following{/other_user}", "gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LZHgrla", "id": 36994684, "login": "LZHgrla", "node_id": "MDQ6VXNlcjM2OTk0Njg0", "organizations_url": "https://api.github.com/users/LZHgrla/orgs", "received_events_url": "https://api.github.com/users/LZHgrla/received_events", "repos_url": "https://api.github.com/users/LZHgrla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions", "type": "User", "url": "https://api.github.com/users/LZHgrla", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for working on this! The [issue](https://bugs.python.org/issue24658) with pickling objects larger than 4GB seems to be patched in Python 3.8 (the minimal supported version was 3.6 at the time of implementing this), so a simple solution would be removing the `Table.__setstate__` and `Table.__getstate__` overrides.", "@mariosasko \r\nCool!\r\nI removed these overrides, and it worked.\r\n\r\nAll modifications are committed. Ready for review!", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005251 / 0.011353 (-0.006102) | 0.003804 / 0.011008 (-0.007204) | 0.063143 / 0.038508 (0.024635) | 0.059409 / 0.023109 (0.036300) | 0.255319 / 0.275898 (-0.020579) | 0.279194 / 0.323480 (-0.044285) | 0.004643 / 0.007986 (-0.003343) | 0.002560 / 0.004328 (-0.001768) | 0.047490 / 0.004250 (0.043240) | 0.039034 / 0.037052 (0.001982) | 0.257352 / 0.258489 (-0.001137) | 0.293029 / 0.293841 (-0.000812) | 0.027548 / 0.128546 (-0.100998) | 0.011307 / 0.075646 (-0.064339) | 0.210325 / 0.419271 (-0.208946) | 0.035161 / 0.043533 (-0.008372) | 0.253491 / 0.255139 (-0.001648) | 0.272085 / 0.283200 (-0.011115) | 0.018924 / 0.141683 (-0.122759) | 1.111148 / 1.452155 (-0.341007) | 1.178076 / 1.492716 (-0.314641) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092447 / 0.018006 (0.074441) | 0.303680 / 0.000490 (0.303190) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019087 / 0.037411 (-0.018325) | 0.062663 / 0.014526 (0.048137) | 0.074651 / 0.176557 (-0.101905) | 0.121334 / 0.737135 (-0.615802) | 0.076703 / 0.296338 (-0.219636) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 
5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286505 / 0.215209 (0.071295) | 2.804942 / 2.077655 (0.727287) | 1.481930 / 1.504120 (-0.022190) | 1.369485 / 1.541195 (-0.171710) | 1.424467 / 1.468490 (-0.044023) | 0.556810 / 4.584777 (-4.027967) | 2.416338 / 3.745712 (-1.329374) | 2.901869 / 5.269862 (-2.367992) | 1.827007 / 4.565676 (-2.738669) | 0.062252 / 0.424275 (-0.362024) | 0.005076 / 0.007607 (-0.002531) | 0.343850 / 0.226044 (0.117805) | 3.377611 / 2.268929 (1.108683) | 1.860214 / 55.444624 (-53.584410) | 1.595146 / 6.876477 (-5.281331) | 1.627234 / 2.142072 (-0.514838) | 0.651027 / 4.805227 (-4.154200) | 0.119214 / 6.500664 (-6.381450) | 0.043342 / 0.075469 (-0.032127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942863 / 1.841788 (-0.898924) | 12.484633 / 8.074308 (4.410324) | 10.560668 / 10.191392 (0.369276) | 0.144647 / 0.680424 (-0.535777) | 0.014734 / 0.534201 (-0.519466) | 0.286575 / 0.579283 (-0.292708) | 0.270913 / 0.434364 (-0.163451) | 0.323792 / 0.540337 (-0.216545) | 0.419186 / 1.386936 (-0.967750) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005315 / 0.011353 (-0.006038) | 0.003548 / 0.011008 (-0.007460) | 0.049271 / 0.038508 (0.010763) | 0.055198 / 0.023109 (0.032089) | 0.275940 / 0.275898 (0.000042) | 0.307637 / 0.323480 (-0.015843) | 0.003997 / 0.007986 (-0.003988) | 0.002544 / 0.004328 (-0.001785) | 0.050381 / 0.004250 (0.046130) | 0.041158 / 0.037052 (0.004105) | 0.281519 / 0.258489 (0.023030) | 0.308085 / 0.293841 (0.014244) | 0.030464 / 0.128546 (-0.098083) | 0.010690 / 0.075646 (-0.064957) | 0.057458 / 0.419271 (-0.361814) | 0.032814 / 0.043533 (-0.010719) | 0.282435 / 0.255139 (0.027296) | 0.301342 / 0.283200 (0.018142) | 0.017556 / 
0.141683 (-0.124127) | 1.159423 / 1.452155 (-0.292732) | 1.177344 / 1.492716 (-0.315372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091086 / 0.018006 (0.073079) | 0.305316 / 0.000490 (0.304826) | 0.000218 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021832 / 0.037411 (-0.015579) | 0.071055 / 0.014526 (0.056529) | 0.082982 / 0.176557 (-0.093574) | 0.119966 / 0.737135 (-0.617169) | 0.083539 / 0.296338 (-0.212800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302501 / 0.215209 (0.087292) | 2.936347 / 2.077655 (0.858692) | 1.601658 / 1.504120 (0.097538) | 1.467267 / 1.541195 (-0.073928) | 1.514656 / 1.468490 (0.046166) | 0.563934 / 4.584777 (-4.020843) | 2.513715 / 3.745712 (-1.231997) | 2.813014 / 5.269862 (-2.456847) | 1.773243 / 4.565676 (-2.792433) | 0.063208 / 0.424275 (-0.361067) | 0.004979 / 0.007607 (-0.002628) | 0.360694 / 0.226044 (0.134650) | 3.520578 / 2.268929 (1.251650) | 1.975369 / 55.444624 (-53.469255) | 1.691257 / 6.876477 (-5.185220) | 1.730872 / 2.142072 (-0.411200) | 0.655366 / 4.805227 (-4.149861) | 0.146043 / 6.500664 (-6.354621) | 0.041386 / 0.075469 (-0.034083) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979840 / 1.841788 (-0.861948) | 12.456924 / 8.074308 (4.382616) | 10.938595 / 10.191392 (0.747203) | 0.133853 / 0.680424 (-0.546571) | 0.015744 / 0.534201 (-0.518457) | 0.289585 / 0.579283 (-0.289698) | 0.291143 / 0.434364 (-0.143221) | 0.328109 / 0.540337 (-0.212228) | 0.561897 / 1.386936 (-0.825039) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05ec66cc1abc20bd13d02c681b7be372ae084a4f \"CML watermark\")\n" ]
2023-11-22T17:55:10Z
2023-11-23T15:19:43Z
2023-11-23T15:13:28Z
CONTRIBUTOR
null
null
null
When using distributed training, the `os.remove(filename)` call may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'` ```python from torch import distributed as dist if dist.get_rank() == 0: dataset = process_dataset(*args, **kwargs) objects = [dataset] else: objects = [None] dist.broadcast_object_list(objects, src=0) dataset = objects[0] ```
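A different way to avoid the same race, distinct from the broadcast workaround shown above (a sketch; assumes the process group is already initialized, and `filename` is a placeholder for the real temporary Arrow file):

```python
import os
from torch import distributed as dist

filename = "/tmp/tmpexample.arrow"  # placeholder path

# Only rank 0 removes the file; the barrier keeps the other ranks from
# racing ahead and failing on an already-deleted path.
if dist.get_rank() == 0 and os.path.exists(filename):
    os.remove(filename)
dist.barrier()
```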
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6444/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6444/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6444.diff", "html_url": "https://github.com/huggingface/datasets/pull/6444", "merged_at": "2023-11-23T15:13:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/6444.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6444" }
https://api.github.com/repos/huggingface/datasets/issues/5998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5998/comments
https://api.github.com/repos/huggingface/datasets/issues/5998/events
https://github.com/huggingface/datasets/issues/5998
1,781,805,018
I_kwDODunzps5qNC_a
5,998
The current implementation has a potential bug in the sort method
{ "avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4", "events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}", "followers_url": "https://api.github.com/users/wangyuxinwhy/followers", "following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}", "gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wangyuxinwhy", "id": 22192665, "login": "wangyuxinwhy", "node_id": "MDQ6VXNlcjIyMTkyNjY1", "organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs", "received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events", "repos_url": "https://api.github.com/users/wangyuxinwhy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions", "type": "User", "url": "https://api.github.com/users/wangyuxinwhy", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, @wangyuxinwhy. " ]
2023-06-30T03:16:57Z
2023-06-30T14:21:03Z
2023-06-30T14:11:25Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

In the sort method, there's this piece of code:

```python
# column_names: Union[str, Sequence[str]]

# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
    column_names = [column_names]
```

Based on the `column_names` type annotation, a tuple should be accepted, but passing one raises an error, as in the example below:

```python
from datasets import load_dataset

dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))

# Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```

After converting the tuple into a list, everything worked fine. Changing the code to the following would avoid the problem:

```python
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
    if isinstance(column_names, str):
        column_names = [column_names]
    else:
        column_names = list(column_names)
```

### Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))

# Raises ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```

### Expected behavior

Passing a tuple into `column_names` should be equivalent to passing a list.

### Environment info

- `datasets` version: 2.13.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
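Until a fix lands, a caller-side workaround (a sketch, not from the report; the columns mirror the example above) is to normalize the argument to a list before calling `sort`:

```python
from datasets import load_dataset

dataset = load_dataset('glue', 'ax')['test']

# Workaround: convert the tuple to a list before sorting
columns = ('premise', 'hypothesis')
dataset = dataset.sort(column_names=list(columns))
```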
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5998/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6259/comments
https://api.github.com/repos/huggingface/datasets/issues/6259/events
https://github.com/huggingface/datasets/issues/6259
1,911,965,758
I_kwDODunzps5x9kg-
6,259
Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories
{ "avatar_url": "https://avatars.githubusercontent.com/u/141304309?v=4", "events_url": "https://api.github.com/users/MF-FOOM/events{/privacy}", "followers_url": "https://api.github.com/users/MF-FOOM/followers", "following_url": "https://api.github.com/users/MF-FOOM/following{/other_user}", "gists_url": "https://api.github.com/users/MF-FOOM/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MF-FOOM", "id": 141304309, "login": "MF-FOOM", "node_id": "U_kgDOCGwh9Q", "organizations_url": "https://api.github.com/users/MF-FOOM/orgs", "received_events_url": "https://api.github.com/users/MF-FOOM/received_events", "repos_url": "https://api.github.com/users/MF-FOOM/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MF-FOOM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MF-FOOM/subscriptions", "type": "User", "url": "https://api.github.com/users/MF-FOOM", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
null
[ "Thanks for reporting this issue! We should be able to avoid this by making our `glob` patterns more precise. In the meantime, you can load the dataset by directly assigning splits to the data files: \r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"parquet\", data_files={\"train\": \"testing123/train/output_train.parquet\", \"validation\": \"testing123/val/output_val.parquet\"})\r\n```" ]
2023-09-25T17:20:54Z
2024-03-15T15:22:04Z
2024-03-15T15:22:04Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets.

### Steps to reproduce the bug

1. Create a root directory, e.g., "testing123".
2. Under "testing123", create two subdirectories: "train" and "val".
3. Create and save a parquet file with 3 unique rows in the "train" subdirectory.
4. Create and save a parquet file with 4 unique rows in the "val" subdirectory.
5. Load the datasets from the root directory using `load_dataset("parquet", data_dir="testing123")`.
6. Iterate through the datasets and print the rows.

Here's a Colab notebook reproducing these steps: https://colab.research.google.com/drive/11NEdImnQ3OqJlwKSHRMhr7jCBesNdLY4?usp=sharing

### Expected behavior

- Training set should contain 3 unique rows.
- Validation set should contain 4 unique rows.

### Environment info

- `datasets` version: 2.14.5
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.2
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
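A script version of those steps (a sketch; the file names mirror the reporter's Colab and the row values are illustrative) looks like this:

```python
from pathlib import Path

import pandas as pd
from datasets import load_dataset

root = Path("testing123")
(root / "train").mkdir(parents=True, exist_ok=True)
(root / "val").mkdir(parents=True, exist_ok=True)

# 3 unique rows for train, 4 unique rows for val
pd.DataFrame({"x": [0, 1, 2]}).to_parquet(root / "train" / "output_train.parquet")
pd.DataFrame({"x": [10, 11, 12, 13]}).to_parquet(root / "val" / "output_val.parquet")

ds = load_dataset("parquet", data_dir="testing123")
print(ds)  # with the overly broad glob patterns, each split reports doubled row counts
```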
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6259/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6259/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5645/comments
https://api.github.com/repos/huggingface/datasets/issues/5645/events
https://github.com/huggingface/datasets/issues/5645
1,627,108,278
I_kwDODunzps5g-7O2
5,645
Datasets map and select(range()) is giving dill error
{ "avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4", "events_url": "https://api.github.com/users/Tanya-11/events{/privacy}", "followers_url": "https://api.github.com/users/Tanya-11/followers", "following_url": "https://api.github.com/users/Tanya-11/following{/other_user}", "gists_url": "https://api.github.com/users/Tanya-11/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tanya-11", "id": 90728105, "login": "Tanya-11", "node_id": "MDQ6VXNlcjkwNzI4MTA1", "organizations_url": "https://api.github.com/users/Tanya-11/orgs", "received_events_url": "https://api.github.com/users/Tanya-11/received_events", "repos_url": "https://api.github.com/users/Tanya-11/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tanya-11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanya-11/subscriptions", "type": "User", "url": "https://api.github.com/users/Tanya-11", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-beam` ?", "@lhoestq That fixed the problem, Thanks :)" ]
2023-03-16T10:01:28Z
2023-03-17T04:24:51Z
2023-03-17T04:24:51Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

I'm using the Hugging Face Datasets library to load the dataset in Google Colab. When I do

> data = train_dataset.select(range(10))

or

> train_datasets = train_dataset.map(
>     process_data_to_model_inputs,
>     batched=True,
>     batch_size=batch_size,
>     remove_columns=["article", "abstract"],
> )

I get the following error: `module 'dill._dill' has no attribute 'log'`

I've tried downgrading the dill version from latest to 0.2.8, but no luck.

Stack trace:

> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj)
>     367 try:
> --> 368     import transformers as tr
>     369
>
> ModuleNotFoundError: No module named 'transformers'
>
> During handling of the above exception, another exception occurred:
>
> AttributeError Traceback (most recent call last)
> 17 frames
> <ipython-input-13-dd14813880a6> in <module>
> ----> 1 test = train_dataset.select(range(10))
>
> /usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
>     155 }
>     156 # apply actual function
> --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
>     158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
>     159 # re-apply format to the output
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
>     155 if kwargs.get(fingerprint_name) is None:
>     156     kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
> --> 157     kwargs[fingerprint_name] = update_fingerprint(
>     158         self._fingerprint, transform, kwargs_for_fingerprint
>     159     )
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
>     103 for key in sorted(transform_args):
>     104     hasher.update(key)
> --> 105     hasher.update(transform_args[key])
>     106 return hasher.hexdigest()
>     107
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value)
>     55 def update(self, value):
>     56     self.m.update(f"=={type(value)}==".encode("utf8"))
> ---> 57     self.m.update(self.hash(value).encode("utf-8"))
>     58
>     59 def hexdigest(self):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value)
>     51     return cls.dispatch[type(value)](cls, value)
>     52 else:
> ---> 53     return cls.hash_default(value)
>     54
>     55 def update(self, value):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value)
>     44 @classmethod
>     45 def hash_default(cls, value):
> ---> 46     return cls.hash_bytes(dumps(value))
>     47
>     48 @classmethod
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj)
>     387 file = StringIO()
>     388 with _no_cache_fields(obj):
> --> 389     dump(obj, file)
>     390 return file.getvalue()
>     391
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file)
>     359 def dump(obj, file):
>     360     """pickle an object to a file"""
> --> 361     Pickler(file, recurse=True).dump(obj)
>     362     return
>     363
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj)
>     392     return
>     393
> --> 394 def load_session(filename='/tmp/session.pkl', main=None):
>     395     """update the __main__ module with the state from the session file"""
>     396     if main is None: main = _main_module
>
> /usr/lib/python3.9/pickle.py in dump(self, obj)
>     485 if self.proto >= 4:
>     486     self.framer.start_framing()
> --> 487     self.save(obj)
>     488     self.write(STOP)
>     489     self.framer.end_framing()
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
>     386     pickler._byref = False  # disable pickling by name reference
>     387     pickler._recurse = False  # disable pickling recursion for globals
> --> 388     pickler._session = True  # is best indicator of when pickling a session
>     389     pickler.dump(main)
>     390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
>     558 f = self.dispatch.get(t)
>     559 if f is not None:
> --> 560     f(self, obj)  # Call unbound method with explicit self
>     561     return
>     562
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj)
>
> /usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
>     689     write(NEWOBJ)
>     690 else:
> --> 691     save(func)
>     692     save(args)
>     693     write(REDUCE)
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
>     386     pickler._byref = False  # disable pickling by name reference
>     387     pickler._recurse = False  # disable pickling recursion for globals
> --> 388     pickler._session = True  # is best indicator of when pickling a session
>     389     pickler.dump(main)
>     390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
>     558 f = self.dispatch.get(t)
>     559 if f is not None:
> --> 560     f(self, obj)  # Call unbound method with explicit self
>     561     return
>     562
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
>     583     dill._dill.log.info("# F1")
>     584 else:
> --> 585     dill._dill.log.info("F2: %s" % obj)
>     586     name = getattr(obj, "__qualname__", getattr(obj, "__name__", None))
>     587     dill._dill.StockPickler.save_global(pickler, obj, name=name)
>
> AttributeError: module 'dill._dill' has no attribute 'log'

### Steps to reproduce the bug

After loading the dataset (e.g. https://huggingface.co/datasets/scientific_papers) in Google Colab, do either

> data = train_dataset.select(range(10))

or

> train_datasets = train_dataset.map(
>     process_data_to_model_inputs,
>     batched=True,
>     batch_size=batch_size,
>     remove_columns=["article", "abstract"],
> )

### Expected behavior

The map and select functions should work.

### Environment info

dataset: https://huggingface.co/datasets/scientific_papers
dill = 0.3.6
python = 3.9.16
transformer = 4.2.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4", "events_url": "https://api.github.com/users/Tanya-11/events{/privacy}", "followers_url": "https://api.github.com/users/Tanya-11/followers", "following_url": "https://api.github.com/users/Tanya-11/following{/other_user}", "gists_url": "https://api.github.com/users/Tanya-11/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tanya-11", "id": 90728105, "login": "Tanya-11", "node_id": "MDQ6VXNlcjkwNzI4MTA1", "organizations_url": "https://api.github.com/users/Tanya-11/orgs", "received_events_url": "https://api.github.com/users/Tanya-11/received_events", "repos_url": "https://api.github.com/users/Tanya-11/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tanya-11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanya-11/subscriptions", "type": "User", "url": "https://api.github.com/users/Tanya-11", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5645/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5339
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5339/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5339/comments
https://api.github.com/repos/huggingface/datasets/issues/5339/events
https://github.com/huggingface/datasets/pull/5339
1,482,817,424
PR_kwDODunzps5EsC8N
5,339
Add Video feature, videofolder, and video-classification task
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5339). All of your documentation changes will be reflected on that endpoint.", "@lhoestq I think I need some serious help with the tests πŸ˜…...I started this locally but it got too time consuming.\n\nOne issue I remember running into is with lossless audio encoding/decoding. I started thinking of using the underlying Audio feature instead of PyAV so I didn't have to rewrite similar logic here...but assumed that would turn into a mess w/ underlying logic", "Are you still planning to work on this?", "I'm closing this PR. Feel free to reopen it if necessary." ]
2022-12-07T20:48:34Z
2024-01-11T06:30:24Z
2023-10-11T09:13:11Z
CONTRIBUTOR
null
null
null
This PR does the following:

- Adds `Video` feature (Resolves #5225)
- Adds `video-classification` task
- Adds `videofolder` packaged module for easy loading of local video classification datasets

TODO:

- [ ] add tests
- [ ] add docs
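Since this PR was ultimately closed unmerged, the following is only a hypothetical usage sketch of the proposed `videofolder` module, mirroring the existing `imagefolder` convention; the directory layout and column names are assumptions, not a documented API:

```python
from datasets import load_dataset

# Assumed layout, one subdirectory per class label:
#   videos/train/jumping/clip_001.mp4
#   videos/train/running/clip_002.mp4
ds = load_dataset("videofolder", data_dir="videos")
print(ds["train"][0])  # would expose a decoded "video" column plus an inferred "label"
```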
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5339/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5339/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5339.diff", "html_url": "https://github.com/huggingface/datasets/pull/5339", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5339.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5339" }
https://api.github.com/repos/huggingface/datasets/issues/5005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5005/comments
https://api.github.com/repos/huggingface/datasets/issues/5005/events
https://github.com/huggingface/datasets/issues/5005
1,380,952,960
I_kwDODunzps5ST6uA
5,005
Release 2.5.0 breaks transformers CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
[ "Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later" ]
2022-09-21T13:39:19Z
2022-09-21T14:11:57Z
2022-09-21T14:11:57Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Describe the bug

As reported by @lhoestq:

> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563

this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5005/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5005/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7492/comments
https://api.github.com/repos/huggingface/datasets/issues/7492/events
https://github.com/huggingface/datasets/pull/7492
2,959,088,568
PR_kwDODunzps6QtCQM
7,492
Closes #7457
{ "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "This PR fixes issue #7457" ]
2025-03-30T20:41:20Z
2025-04-13T22:05:07Z
2025-04-13T22:05:07Z
NONE
null
null
null
This PR updates the documentation to include the HF_DATASETS_CACHE environment variable, which allows users to customize the cache location for datasets, similar to HF_HUB_CACHE for models.
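For example (a usage sketch; the path is illustrative), the variable must be set before `datasets` is imported, since the cache location is resolved at import time:

```python
import os

# Set before importing `datasets`; the config reads this at import time.
os.environ["HF_DATASETS_CACHE"] = "/mnt/storage/hf_datasets_cache"

from datasets import load_dataset

ds = load_dataset("glue", "ax")  # downloaded and cached under the directory above
```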
{ "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7492/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7492.diff", "html_url": "https://github.com/huggingface/datasets/pull/7492", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7492" }
https://api.github.com/repos/huggingface/datasets/issues/7328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7328/comments
https://api.github.com/repos/huggingface/datasets/issues/7328/events
https://github.com/huggingface/datasets/pull/7328
2,738,626,593
PR_kwDODunzps6FKK13
7,328
Fix typo in arrow_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4", "events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}", "followers_url": "https://api.github.com/users/AndreaFrancis/followers", "following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreaFrancis", "id": 5564745, "login": "AndreaFrancis", "node_id": "MDQ6VXNlcjU1NjQ3NDU=", "organizations_url": "https://api.github.com/users/AndreaFrancis/orgs", "received_events_url": "https://api.github.com/users/AndreaFrancis/received_events", "repos_url": "https://api.github.com/users/AndreaFrancis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreaFrancis", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7328). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-12-13T15:17:09Z
2024-12-19T17:10:27Z
2024-12-19T17:10:25Z
CONTRIBUTOR
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4", "events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}", "followers_url": "https://api.github.com/users/AndreaFrancis/followers", "following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreaFrancis", "id": 5564745, "login": "AndreaFrancis", "node_id": "MDQ6VXNlcjU1NjQ3NDU=", "organizations_url": "https://api.github.com/users/AndreaFrancis/orgs", "received_events_url": "https://api.github.com/users/AndreaFrancis/received_events", "repos_url": "https://api.github.com/users/AndreaFrancis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreaFrancis", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7328/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7328/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7328.diff", "html_url": "https://github.com/huggingface/datasets/pull/7328", "merged_at": "2024-12-19T17:10:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/7328.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7328" }
https://api.github.com/repos/huggingface/datasets/issues/4866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4866/comments
https://api.github.com/repos/huggingface/datasets/issues/4866/events
https://github.com/huggingface/datasets/pull/4866
1,344,809,132
PR_kwDODunzps49e1CP
4,866
amend docstring for dunder
{ "avatar_url": "https://avatars.githubusercontent.com/u/37704298?v=4", "events_url": "https://api.github.com/users/schafsam/events{/privacy}", "followers_url": "https://api.github.com/users/schafsam/followers", "following_url": "https://api.github.com/users/schafsam/following{/other_user}", "gists_url": "https://api.github.com/users/schafsam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/schafsam", "id": 37704298, "login": "schafsam", "node_id": "MDQ6VXNlcjM3NzA0Mjk4", "organizations_url": "https://api.github.com/users/schafsam/orgs", "received_events_url": "https://api.github.com/users/schafsam/received_events", "repos_url": "https://api.github.com/users/schafsam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/schafsam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schafsam/subscriptions", "type": "User", "url": "https://api.github.com/users/schafsam", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4866). All of your documentation changes will be reflected on that endpoint." ]
2022-08-19T19:09:15Z
2022-09-09T16:33:11Z
null
NONE
null
null
null
Display dunder methods in docstrings with their underscores, not as bold markdown.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4866/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4866/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4866.diff", "html_url": "https://github.com/huggingface/datasets/pull/4866", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4866.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4866" }
https://api.github.com/repos/huggingface/datasets/issues/6843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6843/comments
https://api.github.com/repos/huggingface/datasets/issues/6843/events
https://github.com/huggingface/datasets/issues/6843
2,265,432,897
I_kwDODunzps6HB8NB
6,843
IterableDataset raises exception instead of retrying
{ "avatar_url": "https://avatars.githubusercontent.com/u/145220868?v=4", "events_url": "https://api.github.com/users/bauwenst/events{/privacy}", "followers_url": "https://api.github.com/users/bauwenst/followers", "following_url": "https://api.github.com/users/bauwenst/following{/other_user}", "gists_url": "https://api.github.com/users/bauwenst/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bauwenst", "id": 145220868, "login": "bauwenst", "node_id": "U_kgDOCKflBA", "organizations_url": "https://api.github.com/users/bauwenst/orgs", "received_events_url": "https://api.github.com/users/bauwenst/received_events", "repos_url": "https://api.github.com/users/bauwenst/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bauwenst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bauwenst/subscriptions", "type": "User", "url": "https://api.github.com/users/bauwenst", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Thanks for reporting! I've opened a PR with a fix.", "Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succeed immediately.\r\n- If the Hub has a small outage on the order of minutes, you don't want to retry on the order of hours. \r\n- If the Hub has a prologned outage of several hours, we don't want to keep retrying on the order of minutes.\r\n\r\nThere actually already exists an implementation for (clipped) exponential backoff in the HuggingFace suite ([here](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/utils/_http.py#L306)), but I don't think it is used here.\r\n\r\nThe requirements are basically that you have an initial minimum waiting time and a maximum waiting time, and with each retry, the waiting time is doubled. We don't want to overload your servers with needless retries, especially when they're down :sweat_smile:", "Oh, I've just remembered that we added retries to the `HfFileSystem` in `huggingface_hub` 0.21.0 (see [this](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/hf_file_system.py#L703)), so I'll close the linked PR as we don't want to retry the retries :).\r\n\r\nI agree with the exponential backoff suggestion, so I'll open another PR.", "@mariosasko The call you linked indeed points to the implementation I linked in my previous comment, yes, but it has no configurability. Arguably, you want to have this hidden backoff under the hood that catches small network disturbances on the time scale of seconds -- perhaps even with hardcoded limits as is the case currently -- but you also still want to have a separate backoff on top of that with the configurability as suggested by @lhoestq in [the comment I linked](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229).\r\n\r\nMy particular use-case is that I'm streaming a dataset while training on a university cluster with a very long scheduling queue. This means that when the backoff runs out of retries (which happens in under 30 seconds with the call you linked), I lose my spot on the cluster and have to queue for a whole day or more. Ideally, I should be able to specify that I want to retry for 2 to 3 hours but with more and more time between requests, so that I can smooth over hours-long outages without a setback of days.", "I also have my runs crash a surprising amount due to the dataloader crashing because of the hub, some way to address this would be nice.", "@mariosasko The implementation for retries is still broken and there is still no exponential back-off.\r\n\r\nHuggingFace has a two-tiered back-off:\r\n- `huggingface_hub.utils` provides the low-level `http_backoff` function which is used for all HTTP requests. It retries first with 1 second delay, then 2, then 4, then 8, then 8, and then it crashes. This is not even half a minute of exponential backoff in total.\r\n- `datasets.utils.file_utils` provides a function `_add_retries_to_file_obj_read_method` that monkey-patches the `read` method of an `HfFileSystemFile` to have constant-time backoff on certain exceptions. The amount of retries and seconds between retries is customisable as explained by @lhoestq [here](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). 
The implementation looks like this:\r\n\r\nhttps://github.com/huggingface/datasets/blob/65f6eb54aa0e8bb44cea35deea28e0e8fecc25b9/src/datasets/utils/file_utils.py#L822-L841\r\n\r\nThis **still does not catch the correct exceptions** and hence no backoff happens **at all** which means that as soon as the hub is out for more than half a minute, processes will already start failing. Here is a stack trace of an uncaught exception:\r\n\r\n```\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py\", line 268, in __iter__\r\n for key, pa_table in self.generate_tables_fn(**gen_kwags):\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py\", line 123, in _generate_tables\r\n batch = f.read(self.config.chunksize)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/utils/file_utils.py\", line 830, in read_with_retries\r\n out = read(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py\", line 757, in read\r\n return super().read(length)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py\", line 1856, in read\r\n out = self.cache._fetch(self.loc, self.loc + length)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py\", line 189, in _fetch\r\n self.cache = self.fetcher(start, end) # new block replaces old\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py\", line 713, in _fetch_range\r\n r = http_backoff(\r\n ^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_http.py\", line 326, in http_backoff\r\n raise err\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_http.py\", line 307, in http_backoff\r\n response = session.request(method=method, url=url, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/requests/sessions.py\", line 589, in request\r\n resp = self.send(prep, **send_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/requests/sessions.py\", line 703, in send\r\n r = adapter.send(request, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_http.py\", line 93, in send\r\n return super().send(request, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/miniconda3/envs/draft/lib/python3.11/site-packages/requests/adapters.py\", line 713, in send\r\n raise ReadTimeout(e, request=request)\r\nrequests.exceptions.ReadTimeout: (ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: 3d145d98-e4fa-442f-bead-6be060e60d59)')\r\n```\r\n**requests.exceptions.ReadTimeout** is not caught and hence the code fails after **0 retries**.", "I merged a fix for this, thanks for reporting ! It will now retry on any `requests` Timeout error, including ReadTimeoutError: https://github.com/huggingface/datasets/pull/7256" ]
2024-04-26T10:00:43Z
2024-10-28T14:57:07Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here:

https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19

If GitHub code snippets still aren't working, here's a copy:

```python
def read_with_retries(*args, **kwargs):
    disconnect_err = None
    for retry in range(1, max_retries + 1):
        try:
            out = read(*args, **kwargs)
            break
        except (ClientError, TimeoutError) as err:
            disconnect_err = err
            logger.warning(
                f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]"
            )
            time.sleep(config.STREAMING_READ_RETRY_INTERVAL)
    else:
        raise ConnectionError("Server Disconnected") from disconnect_err
    return out
```

With the latest outage, the end of my stack trace looked like this:

```
...
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries
    out = read(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read
    return self._buffer.read(size)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read
    buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read
    return self.file.read(size)
           ^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read
    out = self.cache._fetch(self.loc, self.loc + length)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch
    self.cache = self.fetcher(start, end)  # new block replaces old
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
    hf_raise_for_status(r)
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
    raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz
```

Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately.

### Steps to reproduce the bug

Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace.

### Expected behavior

All HTTP errors while iterating a streamable dataset should cause retries.

### Environment info

Output from `datasets-cli env`:

- `datasets` version: 2.18.0
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
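A configurable, clipped exponential backoff along the lines requested in this thread could look like the following sketch (names and defaults are illustrative, not the library's actual implementation):

```python
import time

import requests

def read_with_backoff(read, *args, max_retries=10, base_interval=1.0, max_interval=64.0, **kwargs):
    """Retry `read` on transient HTTP errors, doubling the wait each time up to a cap."""
    interval = base_interval
    last_err = None
    for attempt in range(1, max_retries + 1):
        try:
            return read(*args, **kwargs)
        except (requests.exceptions.ConnectionError, requests.exceptions.Timeout) as err:
            # requests.exceptions.Timeout also covers ReadTimeout and ConnectTimeout
            last_err = err
            print(f"Got disconnected from remote host. Retrying in {interval:.0f}s [{attempt}/{max_retries}]")
            time.sleep(interval)
            interval = min(interval * 2, max_interval)
    raise ConnectionError("Server Disconnected") from last_err
```

With `base_interval=1` and `max_interval=64`, ten retries smooth over roughly ten minutes of outage instead of failing within half a minute.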
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6843/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6843/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5812/comments
https://api.github.com/repos/huggingface/datasets/issues/5812/events
https://github.com/huggingface/datasets/issues/5812
1,691,798,169
I_kwDODunzps5k1sqZ
5,812
Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy
{ "avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4", "events_url": "https://api.github.com/users/offchan42/events{/privacy}", "followers_url": "https://api.github.com/users/offchan42/followers", "following_url": "https://api.github.com/users/offchan42/following{/other_user}", "gists_url": "https://api.github.com/users/offchan42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/offchan42", "id": 15215732, "login": "offchan42", "node_id": "MDQ6VXNlcjE1MjE1NzMy", "organizations_url": "https://api.github.com/users/offchan42/orgs", "received_events_url": "https://api.github.com/users/offchan42/received_events", "repos_url": "https://api.github.com/users/offchan42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/offchan42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/offchan42/subscriptions", "type": "User", "url": "https://api.github.com/users/offchan42", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
null
[]
2023-05-02T05:26:17Z
2023-05-04T14:24:51Z
2023-05-04T14:24:51Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

Shuffling interleaved `IterableDataset` with "all_exhausted" strategy yields non-exhaustive sampling.

### Steps to reproduce the bug

```py
from datasets import IterableDataset, interleave_datasets

def gen(bias, length):
    for i in range(length):
        yield dict(a=bias+i)

seed = 42
probabilities = [0.2, 0.6, 0.2]
d1 = IterableDataset.from_generator(lambda: gen(0, 3))
d2 = IterableDataset.from_generator(lambda: gen(10, 4))
d3 = IterableDataset.from_generator(lambda: gen(20, 3))
ds = interleave_datasets([d1, d2, d3], probabilities=probabilities, seed=seed, stopping_strategy='all_exhausted')
ds = ds.shuffle(buffer_size=1000)
for x in ds:
    print(x)
```

This code produces

```
{'a': 0}
{'a': 22}
{'a': 20}
{'a': 21}
{'a': 10}
{'a': 1}
```

### Expected behavior

It should produce a longer list of examples to exhaust all the datasets. If you comment out the shuffle line, it will exhaust all the datasets properly. Here is the output if you comment out shuffling:

```
{'a': 10}
{'a': 11}
{'a': 20}
{'a': 12}
{'a': 0}
{'a': 21}
{'a': 13}
{'a': 10}
{'a': 1}
{'a': 11}
{'a': 12}
{'a': 22}
{'a': 13}
{'a': 20}
{'a': 10}
{'a': 11}
{'a': 12}
{'a': 2}
```

### Environment info

- `datasets` version: 2.12.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3

This was run on Google Colab.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5812/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5812/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5196/comments
https://api.github.com/repos/huggingface/datasets/issues/5196/events
https://github.com/huggingface/datasets/pull/5196
1,434,401,646
PR_kwDODunzps5CH439
5,196
Use hfh hf_hub_url function
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5196). All of your documentation changes will be reflected on that endpoint.", "@lhoestq I think we should first agree if `datasets` can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: some users may have override this.\r\n\r\nIf so, I then would suggest to initiate a deprecation cycle.", "After a discussion with the rest of the datasets team, we agreed we can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: this will have minimal impact, only for **private Hubs**. We will address eventual possible impacts in the future.\r\n\r\nAdditionally, we also ignore `config.HUB_DEFAULT_VERSION`.\r\n\r\nSee explanation in this PR description: https://github.com/huggingface/datasets/pull/5196#issue-1434401646", "I'm trying to upgrade datasets to 2.7.0 in https://github.com/huggingface/datasets-server, and the tests fail due to this change. I think it's a breaking change (that was not listed in https://github.com/huggingface/datasets/releases/tag/2.7.0) since code that previously worked (by setting `datasets.config.HUB_DATASETS_URL = CI_HUB_DATASETS_URL` for example) does not work anymore.\r\n\r\nI'm not sure what is the correct way to set up the tests; besides setting the env var \"HF_ENDPOINT\" before launching the tests (which, I think, is not a good way to do: the tests should not depend on the environment).", "OK, I re-read this thread, and https://github.com/huggingface/datasets/pull/5196#issuecomment-1307430175 explicitely states that `config.HUB_DATASETS_URL` (as well as `config.HUB_DEFAULT_VERSION`) is now ignored. I was expecting the breaking changes to be listed in the release notes: https://github.com/huggingface/datasets/releases/tag/2.7.0.", "> I'm not sure what is the correct way to set up the tests; besides setting the env var \"HF_ENDPOINT\" before launching the tests (which, I think, is not a good way to do: the tests should not depend on the environment).\r\n\r\nI think the current workaround of settings an env variable before launching the tests is \"not so bad\" when considering the fact that env variables are evaluated at import time in `huggingface_hub` (and most probable `datasets` as well). I think that when refactoring this in huggingface_hub (https://github.com/huggingface/huggingface_hub/issues/1172) I'll opt for instantiating a `Settings` object (or `Constants`) that contains all the settings variables. This way it will not be possible to import attributes individually + tests would be easier. As I see it, it would be similar to [what `Pydantic` does](https://pydantic-docs.helpmanual.io/usage/settings/) even though we most probably don't want Pydantic as a root dependency just for that. 
", "You can use fixtures in your tests:\r\n```python\r\nCI_HUB_ENDPOINT = \"https://hub-ci.huggingface.co\"\r\nCI_HUB_DATASETS_URL = CI_HUB_ENDPOINT + \"/datasets/{repo_id}/resolve/{revision}/{path}\"\r\nCI_HFH_HUGGINGFACE_CO_URL_TEMPLATE = CI_HUB_ENDPOINT + \"/{repo_id}/resolve/{revision}/{filename}\"\r\n\r\n@pytest.fixture\r\ndef ci_hfh_hf_hub_url(monkeypatch):\r\n monkeypatch.setattr(\r\n \"huggingface_hub.file_download.HUGGINGFACE_CO_URL_TEMPLATE\", CI_HFH_HUGGINGFACE_CO_URL_TEMPLATE\r\n )\r\n\r\n@pytest.fixture\r\ndef ci_hub_config(monkeypatch):\r\n monkeypatch.setattr(\"datasets.config.HF_ENDPOINT\", CI_HUB_ENDPOINT)\r\n monkeypatch.setattr(\"datasets.config.HUB_DATASETS_URL\", CI_HUB_DATASETS_URL)\r\n```\r\n\r\nand use `@pytest.fixture(autouse=True)` if you want to always use the CI endpoints.\r\n\r\nAnd when `huggingface-hub` and `datasets` change the way we can set the endpoint, we'll just need to update the fixtures.\r\nI think ultimately you'll only have to change the `huggingface-hub` endpoint settings\r\n", "OK.\r\n\r\nIn fact, in datasets-server we set `config.HUB_DATASETS_URL` (https://github.com/huggingface/datasets-server/blob/35a30dbcd687b26db1f02502ea8305f70c064473/workers/splits/src/splits/config.py#L26) at config time, before starting the workers. It's not an issue with how to launch the tests, but with the app in itself.\r\n\r\nI understand that for now, the only way to fix this is to setup `HF_ENDPOINT` in the env when launching the app (currently, we set the endpoint with `COMMON_HF_ENDPOINT`, a custom env var I set to be sure not to have side-effects)", "> You can use fixtures in your tests:\r\n\r\nThanks, used in https://github.com/huggingface/datasets-server/pull/644." ]
2022-11-03T10:08:09Z
2022-12-06T11:38:17Z
2022-11-09T07:15:12Z
MEMBER
null
null
null
Small refactoring to use the `hf_hub_url` function from `huggingface_hub`. This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`. This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood). EDIT: ~~Finally, we use our `config.HUB_DATASETS_URL` when using `hfh.hf_hub_url`~~ There is a breaking change: the `hfh` `hf_hub_url` function uses - the `hfh` `HUGGINGFACE_CO_URL_TEMPLATE` URL template, different from the `datasets` `config.HUB_DATASETS_URL` - also the `hfh` `DEFAULT_REVISION`, instead of the `datasets` `config.HUB_DEFAULT_VERSION`
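For reference, a minimal sketch of the `hfh` function this PR switches to (the repo id and filename are made up; the printed URL assumes the default public endpoint):

```python
from huggingface_hub import hf_hub_url

# Builds a resolve URL from hfh's HUGGINGFACE_CO_URL_TEMPLATE; when revision is
# omitted it falls back to hfh's DEFAULT_REVISION ("main"), not datasets'
# config.HUB_DEFAULT_VERSION.
url = hf_hub_url(repo_id="user/my-dataset", filename="data/train.csv", repo_type="dataset")
print(url)  # e.g. https://huggingface.co/datasets/user/my-dataset/resolve/main/data/train.csv
```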
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5196/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5196/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5196.diff", "html_url": "https://github.com/huggingface/datasets/pull/5196", "merged_at": "2022-11-09T07:15:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5196.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5196" }
https://api.github.com/repos/huggingface/datasets/issues/5230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5230/comments
https://api.github.com/repos/huggingface/datasets/issues/5230/events
https://github.com/huggingface/datasets/issues/5230
1,445,507,580
I_kwDODunzps5WKLH8
5,230
dataclasses error when importing the library in python 3.11
{ "avatar_url": "https://avatars.githubusercontent.com/u/76044840?v=4", "events_url": "https://api.github.com/users/yonikremer/events{/privacy}", "followers_url": "https://api.github.com/users/yonikremer/followers", "following_url": "https://api.github.com/users/yonikremer/following{/other_user}", "gists_url": "https://api.github.com/users/yonikremer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yonikremer", "id": 76044840, "login": "yonikremer", "node_id": "MDQ6VXNlcjc2MDQ0ODQw", "organizations_url": "https://api.github.com/users/yonikremer/orgs", "received_events_url": "https://api.github.com/users/yonikremer/received_events", "repos_url": "https://api.github.com/users/yonikremer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yonikremer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yonikremer/subscriptions", "type": "User", "url": "https://api.github.com/users/yonikremer", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" } ]
null
[ "I opened [this issue](https://github.com/python/cpython/issues/99401).\r\nPython's maintainers say that the issue is caused by [this change](https://docs.python.org/3.11/whatsnew/3.11.html#dataclasses).\r\nI believe adding a `__hash__` method to `datasets.utils.version.Version` should solve (at least partially) this issue.", "Has this been fixed? I am running into this issue now. \r\n\r\nIf this has been fixed, could have a new release with this?\r\n", "Hi, I am getting error while trainingΒ \r\n\r\n(tensorflow) C:\\tensorflow\\models\\research\\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config\r\nTraceback (most recent call last):\r\n File \"C:\\tensorflow\\models\\research\\object_detection\\train.py\", line 54, in <module>\r\n from object_detection.legacy import trainer\r\n File \"C:\\tensorflow\\models\\research\\object_detection\\legacy\\trainer.py\", line 27, in <module>\r\n from object_detection.builders import optimizer_builder\r\n File \"C:\\tensorflow\\models\\research\\object_detection\\builders\\optimizer_builder.py\", line 25, in <module>\r\n from official.modeling.optimization import ema_optimizer\r\n File \"C:\\tensorflow\\models\\official\\modeling\\optimization\\__init__.py\", line 19, in <module>\r\n from official.modeling.optimization.configs.optimization_config import *\r\n File \"C:\\tensorflow\\models\\official\\modeling\\optimization\\configs\\optimization_config.py\", line 31, in <module>\r\n @dataclasses.dataclass\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 1223, in dataclass\r\n return wrap(cls)\r\n ^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 1213, in wrap\r\n return _process_class(cls, init, repr, eq, order, unsafe_hash,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 958, in _process_class\r\n cls_fields.append(_get_field(cls, name, type, kw_only))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\", line 815, in _get_field\r\n raise ValueError(f'mutable default {type(f.default)} for field '\r\nValueError: mutable default <class 'official.modeling.optimization.configs.optimizer_config.SGDConfig'> for field sgd is not allowed: use default_factory", "@Jayanth1812 and anyone else receiving a similar issue, it most likely has to do with your Python version. Downgrading to Python 3.9 works for me, but doing a downgrade might impact a lot of things. So to be safe and what worked for me was creating a new conda environment and following the installations here: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html\r\n\r\nAnd for Tensorflow GPU compatibility, after installing TensorFlow follow the instructions in section 4 'GPU Setup' in this document: https://www.tensorflow.org/install/pip", "@Jayanth1812, you can see in your error stack trace, that the error is caused by the `tensorflow` library, not by the `datasets` library. See:\r\n```\r\nFile \"C:\\Users\\x0133252\\AppData\\Local\\anaconda3\\envs\\tensorflow\\Lib\\dataclasses.py\"\r\n```\r\n\r\nYou should open an issue in their repository instead: https://github.com/tensorflow/tensorflow " ]
2022-11-11T13:53:49Z
2023-05-25T04:37:05Z
2022-11-14T15:27:37Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When I import datasets using Python 3.11, the dataclasses standard library raises the following error: `ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory` When I tried to import the library using the following jupyter notebook: ``` %%bash # create python 3.11 conda env conda create --yes --quiet -n myenv -c conda-forge python=3.11 # activate is source activate myenv # install pyarrow /opt/conda/envs/myenv/bin/python -m pip install --quiet --extra-index-url https://pypi.fury.io/arrow-nightlies/ \ --prefer-binary --pre pyarrow # install datasets /opt/conda/envs/myenv/bin/python -m pip install --quiet datasets ``` ``` # create a python file that only imports datasets with open("import_datasets.py", 'w') as f: f.write("import datasets") # run it with the env !/opt/conda/envs/myenv/bin/python import_datasets.py ``` I get the following error: ``` Traceback (most recent call last): File "/kaggle/working/import_datasets.py", line 1, in <module> import datasets File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/__init__.py", line 45, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/builder.py", line 91, in <module> @dataclass ^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1221, in dataclass return wrap(cls) ^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1211, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 959, in _process_class cls_fields.append(_get_field(cls, name, type, kw_only)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 816, in _get_field raise ValueError(f'mutable default {type(f.default)} for field ' ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory ``` This is probably due to one of the following changes in the [dataclasses standard library](https://docs.python.org/3/library/dataclasses.html) in version 3.11: 1. Changed in version 3.11: Instead of looking for and disallowing objects of type list, dict, or set, unhashable objects are now not allowed as default values. Unhashability is used to approximate mutability. 2. fields may optionally specify a default value, using normal Python syntax: ``` @dataclass class C: a: int # 'a' has no default value b: int = 0 # assign a default value for 'b' In this example, both a and b will be included in the added __init__() method, which will be defined as: def __init__(self, a: int, b: int = 0): ``` 3. Changed in version 3.11: If a field name is already included in the __slots__ of a base class, it will not be included in the generated __slots__ to prevent [overriding them](https://docs.python.org/3/reference/datamodel.html#datamodel-note-slots). Therefore, do not use __slots__ to retrieve the field names of a dataclass. Use [fields()](https://docs.python.org/3/library/dataclasses.html#dataclasses.fields) instead. To be able to determine inherited slots, base class __slots__ may be any iterable, but not an iterator. 4. weakref_slot: If true (the default is False), add a slot named “__weakref__”, which is required to make an instance weakref-able. 
It is an error to specify weakref_slot=True without also specifying slots=True. [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) will be raised if a field without a default value follows a field with a default value. This is true whether this occurs in a single class, or as a result of class inheritance. ### Steps to reproduce the bug Steps to reproduce the behavior: 1. go to [the notebook in kaggle](https://www.kaggle.com/yonikremer/repreducing-issue) 2. run both of the cells ### Expected behavior I'm expecting no issues. This error should not occur. ### Environment info kaggle kernels, with default settings: pin to original environment, no accelerator.
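A self-contained sketch of the Python 3.11 behavior and the usual fix; the `Version` stand-in below is illustrative, not the real `datasets.utils.version.Version`:

```python
from dataclasses import dataclass, field

class Version:
    # Defining __eq__ without __hash__ sets __hash__ to None, so instances are
    # unhashable, which Python 3.11 treats as "mutable" for dataclass defaults.
    def __init__(self, version_str="1.0.0"):
        self.version_str = version_str
    def __eq__(self, other):
        return isinstance(other, Version) and self.version_str == other.version_str

try:
    @dataclass
    class BuilderConfig:
        version: Version = Version("1.0.0")  # raises ValueError on Python 3.11+
except ValueError as e:
    print(e)

# Fix: use a default_factory (or give Version a __hash__ method, as suggested above).
@dataclass
class FixedBuilderConfig:
    version: Version = field(default_factory=lambda: Version("1.0.0"))
```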
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/5230/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5230/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5796
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5796/comments
https://api.github.com/repos/huggingface/datasets/issues/5796/events
https://github.com/huggingface/datasets/pull/5796
1,685,451,919
PR_kwDODunzps5PORm-
5,796
Spark docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010480 / 0.011353 (-0.000872) | 0.006743 / 0.011008 (-0.004265) | 0.126503 / 0.038508 (0.087995) | 0.036918 / 0.023109 (0.013808) | 0.387372 / 0.275898 (0.111474) | 0.456930 / 0.323480 (0.133450) | 0.008038 / 0.007986 (0.000052) | 0.005082 / 0.004328 (0.000753) | 0.093312 / 0.004250 (0.089062) | 0.065440 / 0.037052 (0.028387) | 0.378172 / 0.258489 (0.119683) | 0.430049 / 0.293841 (0.136208) | 0.054372 / 0.128546 (-0.074174) | 0.021875 / 0.075646 (-0.053772) | 0.441722 / 0.419271 (0.022450) | 0.063716 / 0.043533 (0.020183) | 0.375718 / 0.255139 (0.120579) | 0.413688 / 0.283200 (0.130488) | 0.122583 / 0.141683 (-0.019100) | 1.835992 / 1.452155 (0.383838) | 1.915862 / 1.492716 (0.423145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275305 / 0.018006 (0.257299) | 0.617170 / 0.000490 (0.616680) | 0.006467 / 0.000200 (0.006267) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031057 / 0.037411 (-0.006354) | 0.135178 / 0.014526 (0.120653) | 0.139265 / 0.176557 (-0.037292) | 0.221597 / 0.737135 (-0.515538) | 0.147632 / 0.296338 (-0.148706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.640621 / 0.215209 (0.425411) | 6.354359 / 2.077655 (4.276704) | 2.748945 / 
1.504120 (1.244825) | 2.396637 / 1.541195 (0.855442) | 2.395193 / 1.468490 (0.926703) | 1.209604 / 4.584777 (-3.375173) | 5.626901 / 3.745712 (1.881189) | 3.300941 / 5.269862 (-1.968920) | 2.123598 / 4.565676 (-2.442078) | 0.144270 / 0.424275 (-0.280005) | 0.015114 / 0.007607 (0.007507) | 0.812352 / 0.226044 (0.586307) | 8.024250 / 2.268929 (5.755322) | 3.557589 / 55.444624 (-51.887036) | 2.840632 / 6.876477 (-4.035845) | 3.152319 / 2.142072 (1.010246) | 1.447232 / 4.805227 (-3.357995) | 0.251740 / 6.500664 (-6.248924) | 0.083725 / 0.075469 (0.008256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568032 / 1.841788 (-0.273755) | 18.463860 / 8.074308 (10.389552) | 21.217395 / 10.191392 (11.026003) | 0.228457 / 0.680424 (-0.451967) | 0.031398 / 0.534201 (-0.502803) | 0.547627 / 0.579283 (-0.031656) | 0.642921 / 0.434364 (0.208557) | 0.687857 / 0.540337 (0.147520) | 0.800940 / 1.386936 (-0.585996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009933 / 0.011353 (-0.001420) | 0.006065 / 0.011008 (-0.004943) | 0.102556 / 0.038508 (0.064048) | 0.034646 / 0.023109 (0.011537) | 0.437951 / 0.275898 (0.162053) | 0.482439 / 0.323480 (0.158959) | 0.007715 / 0.007986 (-0.000271) | 0.007426 / 0.004328 (0.003098) | 0.096427 / 0.004250 (0.092177) | 0.052983 / 0.037052 (0.015930) | 0.464533 / 0.258489 (0.206044) | 0.484848 / 0.293841 (0.191007) | 0.050415 / 0.128546 (-0.078131) | 0.021001 / 0.075646 (-0.054645) | 0.121214 / 0.419271 (-0.298058) | 0.061658 / 0.043533 (0.018125) | 0.431898 / 0.255139 (0.176759) | 0.482106 / 0.283200 (0.198907) | 0.128524 / 0.141683 (-0.013159) | 1.775714 / 1.452155 (0.323559) | 1.904738 / 1.492716 (0.412021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287641 / 0.018006 (0.269635) | 0.600667 / 0.000490 (0.600178) | 0.005097 / 0.000200 (0.004897) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032836 / 0.037411 (-0.004575) | 0.133114 / 0.014526 (0.118588) | 0.150874 / 0.176557 (-0.025683) | 0.217069 / 0.737135 (-0.520066) | 0.160387 / 0.296338 (-0.135951) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668444 / 0.215209 (0.453235) | 6.240015 / 2.077655 (4.162360) | 2.808661 / 1.504120 (1.304542) | 2.336550 / 1.541195 (0.795356) | 2.538973 / 1.468490 (1.070483) | 1.189292 / 4.584777 (-3.395485) | 5.781028 / 3.745712 (2.035315) | 3.149895 / 5.269862 (-2.119967) | 2.130646 / 4.565676 (-2.435030) | 0.144944 / 0.424275 (-0.279331) | 0.014650 / 0.007607 (0.007043) | 0.792313 / 0.226044 (0.566269) | 7.933108 / 2.268929 (5.664180) | 3.527527 / 55.444624 (-51.917098) | 2.864271 / 6.876477 (-4.012205) | 3.098330 / 2.142072 (0.956258) | 1.421208 / 4.805227 (-3.384019) | 0.255638 / 6.500664 (-6.245026) | 0.086971 / 0.075469 (0.011502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585317 / 1.841788 (-0.256471) | 18.643133 / 8.074308 (10.568825) | 21.921256 / 10.191392 (11.729864) | 0.215493 / 0.680424 (-0.464931) | 0.028348 / 0.534201 (-0.505853) | 0.556925 / 0.579283 (-0.022358) | 0.631480 / 0.434364 (0.197116) | 0.654026 / 0.540337 (0.113689) | 0.799727 / 1.386936 (-0.587209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#62520514b524b5904c7e4f0beddab1971212a96a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006516 / 0.011353 (-0.004837) | 0.004500 / 0.011008 (-0.006509) | 0.097639 / 0.038508 (0.059131) | 0.028336 / 0.023109 (0.005227) | 0.377263 / 0.275898 (0.101365) | 0.409209 / 0.323480 (0.085729) | 0.004832 / 0.007986 (-0.003154) | 0.004629 / 0.004328 (0.000301) | 0.075046 / 0.004250 (0.070795) | 0.034080 / 0.037052 (-0.002972) | 0.377565 / 0.258489 (0.119076) | 0.419204 / 0.293841 (0.125363) | 0.030343 / 0.128546 (-0.098203) | 0.011465 / 0.075646 (-0.064182) | 0.322777 / 0.419271 (-0.096494) | 0.043774 / 0.043533 (0.000241) | 0.375808 / 0.255139 (0.120669) | 0.402665 / 0.283200 (0.119465) | 0.086811 / 0.141683 (-0.054872) | 1.518686 / 1.452155 (0.066531) | 1.540381 / 1.492716 (0.047664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197730 / 0.018006 (0.179724) | 0.409285 / 0.000490 (0.408795) | 0.004739 / 0.000200 (0.004539) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022974 / 0.037411 (-0.014437) | 0.096843 / 0.014526 (0.082317) | 0.103241 / 0.176557 (-0.073316) | 0.163691 / 0.737135 (-0.573444) | 0.107905 / 0.296338 (-0.188433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449408 / 0.215209 (0.234199) | 4.501375 / 2.077655 (2.423720) | 2.181491 / 1.504120 (0.677371) | 1.986153 / 1.541195 (0.444958) | 2.024735 / 1.468490 (0.556245) | 0.695368 / 4.584777 (-3.889409) | 3.416912 / 3.745712 (-0.328800) | 1.893343 / 5.269862 (-3.376519) | 1.275535 / 4.565676 (-3.290142) | 0.082772 / 0.424275 (-0.341503) | 0.012365 / 0.007607 (0.004758) | 0.553859 / 0.226044 (0.327814) | 5.540014 / 2.268929 (3.271085) | 2.634298 / 55.444624 (-52.810326) | 2.286686 / 6.876477 (-4.589790) | 2.384402 / 2.142072 (0.242330) | 0.806413 / 4.805227 (-3.998814) | 0.151757 / 6.500664 (-6.348907) | 0.067155 / 0.075469 (-0.008314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198776 / 1.841788 (-0.643012) | 13.517434 / 8.074308 (5.443126) | 13.926300 / 10.191392 (3.734908) | 0.141887 / 0.680424 (-0.538537) | 0.016571 / 0.534201 (-0.517630) | 0.383179 / 0.579283 (-0.196104) | 0.395189 / 0.434364 (-0.039175) | 0.479635 / 0.540337 
(-0.060702) | 0.570576 / 1.386936 (-0.816360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006691 / 0.011353 (-0.004662) | 0.004634 / 0.011008 (-0.006375) | 0.077087 / 0.038508 (0.038579) | 0.028281 / 0.023109 (0.005172) | 0.340108 / 0.275898 (0.064210) | 0.370611 / 0.323480 (0.047131) | 0.004997 / 0.007986 (-0.002988) | 0.003336 / 0.004328 (-0.000992) | 0.074814 / 0.004250 (0.070563) | 0.039001 / 0.037052 (0.001948) | 0.344225 / 0.258489 (0.085736) | 0.380621 / 0.293841 (0.086780) | 0.030858 / 0.128546 (-0.097689) | 0.011623 / 0.075646 (-0.064023) | 0.085016 / 0.419271 (-0.334256) | 0.042378 / 0.043533 (-0.001155) | 0.341428 / 0.255139 (0.086289) | 0.364823 / 0.283200 (0.081624) | 0.096695 / 0.141683 (-0.044988) | 1.527683 / 1.452155 (0.075528) | 1.585361 / 1.492716 (0.092645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184280 / 0.018006 (0.166274) | 0.397845 / 0.000490 (0.397355) | 0.004415 / 0.000200 (0.004215) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.101053 / 0.014526 (0.086527) | 0.108968 / 0.176557 (-0.067589) | 0.155732 / 0.737135 (-0.581403) | 0.112604 / 0.296338 (-0.183735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440819 / 0.215209 (0.225609) | 4.394017 / 2.077655 (2.316363) | 2.092456 / 1.504120 (0.588336) | 1.880186 / 1.541195 (0.338991) | 1.918035 
/ 1.468490 (0.449545) | 0.698059 / 4.584777 (-3.886718) | 3.422598 / 3.745712 (-0.323114) | 1.860465 / 5.269862 (-3.409396) | 1.157788 / 4.565676 (-3.407889) | 0.083566 / 0.424275 (-0.340709) | 0.012440 / 0.007607 (0.004832) | 0.549526 / 0.226044 (0.323481) | 5.500623 / 2.268929 (3.231694) | 2.546980 / 55.444624 (-52.897644) | 2.199527 / 6.876477 (-4.676949) | 2.297276 / 2.142072 (0.155203) | 0.801580 / 4.805227 (-4.003648) | 0.151842 / 6.500664 (-6.348822) | 0.067165 / 0.075469 (-0.008305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329097 / 1.841788 (-0.512691) | 13.830354 / 8.074308 (5.756046) | 14.155250 / 10.191392 (3.963858) | 0.144517 / 0.680424 (-0.535907) | 0.016738 / 0.534201 (-0.517463) | 0.379337 / 0.579283 (-0.199946) | 0.391382 / 0.434364 (-0.042982) | 0.459153 / 0.540337 (-0.081184) | 0.547287 / 1.386936 (-0.839649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2efb0289c887ec60d54e0715cd85c111cb45f9ee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007176 / 0.011353 (-0.004177) | 0.005125 / 0.011008 (-0.005883) | 0.096060 / 0.038508 (0.057552) | 0.033262 / 0.023109 (0.010152) | 0.311461 / 0.275898 (0.035563) | 0.340673 / 0.323480 (0.017193) | 0.005700 / 0.007986 (-0.002286) | 0.005223 / 0.004328 (0.000894) | 0.072812 / 0.004250 (0.068561) | 0.042078 / 0.037052 (0.005025) | 0.320042 / 0.258489 (0.061553) | 0.346539 / 0.293841 (0.052698) | 0.035284 / 0.128546 (-0.093262) | 0.012021 / 0.075646 (-0.063625) | 0.331555 / 0.419271 (-0.087717) | 0.051058 / 0.043533 (0.007525) | 0.303001 / 0.255139 (0.047862) | 0.328431 / 0.283200 (0.045231) | 0.100954 / 0.141683 (-0.040729) | 1.407445 / 1.452155 (-0.044710) | 1.512826 / 1.492716 (0.020110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216442 / 0.018006 (0.198436) | 0.446298 / 0.000490 (0.445809) | 0.004701 / 
0.000200 (0.004501) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028088 / 0.037411 (-0.009324) | 0.108669 / 0.014526 (0.094144) | 0.119597 / 0.176557 (-0.056960) | 0.178249 / 0.737135 (-0.558886) | 0.123914 / 0.296338 (-0.172424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413437 / 0.215209 (0.198228) | 4.136602 / 2.077655 (2.058947) | 1.875872 / 1.504120 (0.371752) | 1.680783 / 1.541195 (0.139588) | 1.757059 / 1.468490 (0.288569) | 0.711080 / 4.584777 (-3.873697) | 3.791701 / 3.745712 (0.045989) | 2.111612 / 5.269862 (-3.158250) | 1.351204 / 4.565676 (-3.214473) | 0.086477 / 0.424275 (-0.337798) | 0.012359 / 0.007607 (0.004752) | 0.504984 / 0.226044 (0.278940) | 5.040456 / 2.268929 (2.771527) | 2.266946 / 55.444624 (-53.177679) | 1.957827 / 6.876477 (-4.918650) | 2.120490 / 2.142072 (-0.021583) | 0.856148 / 4.805227 (-3.949079) | 0.172414 / 6.500664 (-6.328250) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198163 / 1.841788 (-0.643625) | 14.944930 / 8.074308 (6.870622) | 14.317196 / 10.191392 (4.125804) | 0.166104 / 0.680424 (-0.514320) | 0.017443 / 0.534201 (-0.516758) | 0.423025 / 0.579283 (-0.156258) | 0.437476 / 0.434364 (0.003112) | 0.500156 / 0.540337 (-0.040181) | 0.606226 / 1.386936 (-0.780710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007417 / 0.011353 (-0.003936) | 0.005143 / 0.011008 (-0.005865) | 0.076401 / 0.038508 (0.037893) | 0.034818 / 0.023109 (0.011709) | 0.339633 / 0.275898 (0.063735) | 0.373839 / 0.323480 (0.050359) | 0.006004 / 0.007986 (-0.001982) | 0.005403 / 0.004328 (0.001075) | 0.074150 / 0.004250 (0.069899) | 0.050489 / 0.037052 (0.013436) | 0.343357 / 0.258489 (0.084868) | 0.377009 / 0.293841 (0.083168) | 0.035921 / 0.128546 (-0.092625) | 0.012197 / 0.075646 (-0.063449) | 0.087992 / 0.419271 (-0.331279) | 0.049452 / 0.043533 (0.005919) | 0.340495 / 0.255139 (0.085356) | 0.360277 / 0.283200 (0.077077) | 0.111114 / 0.141683 (-0.030569) | 1.463888 / 1.452155 (0.011734) | 1.548320 / 1.492716 (0.055604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228437 / 0.018006 (0.210431) | 0.445120 / 0.000490 (0.444631) | 0.000392 / 0.000200 (0.000192) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029965 / 0.037411 (-0.007446) | 0.113484 / 0.014526 (0.098958) | 0.125249 / 0.176557 (-0.051308) | 0.177201 / 0.737135 (-0.559934) | 0.128750 / 0.296338 (-0.167589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420089 / 0.215209 (0.204880) | 4.195772 / 2.077655 (2.118117) | 2.021539 / 1.504120 (0.517419) | 1.825118 / 1.541195 (0.283924) | 1.904090 / 1.468490 (0.435600) | 0.716276 / 4.584777 (-3.868501) | 3.742257 / 3.745712 (-0.003455) | 3.368880 / 5.269862 (-1.900981) | 1.728285 / 4.565676 (-2.837392) | 0.087656 / 0.424275 (-0.336619) | 0.012263 / 0.007607 (0.004656) | 0.524321 / 0.226044 (0.298277) | 5.217610 / 2.268929 (2.948682) | 2.474670 / 55.444624 (-52.969955) | 2.135452 / 6.876477 (-4.741025) | 2.292578 / 2.142072 (0.150505) | 0.852109 / 4.805227 (-3.953119) | 0.172031 / 6.500664 (-6.328633) | 0.065230 / 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260494 / 1.841788 (-0.581293) | 15.019167 / 8.074308 (6.944859) | 14.647586 / 10.191392 (4.456193) | 0.170578 / 0.680424 (-0.509846) | 0.017619 / 0.534201 (-0.516582) | 0.423116 / 0.579283 (-0.156167) | 0.426680 / 0.434364 (-0.007684) | 0.519563 / 0.540337 (-0.020775) | 0.619335 / 1.386936 (-0.767601) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e210dc20c19b5e6af05df9ca6e82984dfb42465f \"CML watermark\")\n" ]
2023-04-26T17:39:43Z
2023-04-27T16:41:50Z
2023-04-27T16:34:45Z
MEMBER
null
null
null
Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701 cc @maddiedawson
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5796/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5796.diff", "html_url": "https://github.com/huggingface/datasets/pull/5796", "merged_at": "2023-04-27T16:34:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/5796.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5796" }
https://api.github.com/repos/huggingface/datasets/issues/4920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4920/comments
https://api.github.com/repos/huggingface/datasets/issues/4920/events
https://github.com/huggingface/datasets/issues/4920
1,357,564,589
I_kwDODunzps5Q6sqt
4,920
Unable to load local tsv files through load_dataset method
{ "avatar_url": "https://avatars.githubusercontent.com/u/44038517?v=4", "events_url": "https://api.github.com/users/DataNoob0723/events{/privacy}", "followers_url": "https://api.github.com/users/DataNoob0723/followers", "following_url": "https://api.github.com/users/DataNoob0723/following{/other_user}", "gists_url": "https://api.github.com/users/DataNoob0723/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DataNoob0723", "id": 44038517, "login": "DataNoob0723", "node_id": "MDQ6VXNlcjQ0MDM4NTE3", "organizations_url": "https://api.github.com/users/DataNoob0723/orgs", "received_events_url": "https://api.github.com/users/DataNoob0723/received_events", "repos_url": "https://api.github.com/users/DataNoob0723/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DataNoob0723/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DataNoob0723/subscriptions", "type": "User", "url": "https://api.github.com/users/DataNoob0723", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_files=data_files)\r\n``` " ]
2022-08-31T16:13:39Z
2022-09-01T05:31:30Z
2022-09-01T05:31:30Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
## Describe the bug

Unable to load local TSV files through the `load_dataset` method.

## Steps to reproduce the bug

```python
# Sample code to reproduce the bug
data_files = {
    'train': 'train.tsv',
    'test': 'test.tsv'
}
raw_datasets = load_dataset('tsv', data_files=data_files)
```

## Expected results

I am pretty sure the data files exist in the current directory. The above code should load them as Datasets, but it threw exceptions instead.

## Actual results

```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
[<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module>
----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv')

2 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
   1244         f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
   1245         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1246     ) from None
   1247     raise e1 from None
   1248 else:

FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py
```

## Environment info

- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4920/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4920/timeline
null
completed
null
null
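For context, a minimal sketch of the workaround suggested in the maintainer's comment above, assuming `train.tsv` and `test.tsv` exist in the working directory: there is no `tsv` builder, so tab-separated files go through the generic `csv` builder with the pandas `sep` kwarg.

```python
from datasets import load_dataset

# Hypothetical local files; any tab-separated files work the same way.
data_files = {
    "train": "train.tsv",
    "test": "test.tsv",
}

# The "csv" builder forwards extra kwargs such as `sep` to pandas.read_csv,
# so TSV files load without a dedicated "tsv" builder.
raw_datasets = load_dataset("csv", sep="\t", data_files=data_files)
print(raw_datasets)
```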
https://api.github.com/repos/huggingface/datasets/issues/5116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5116/comments
https://api.github.com/repos/huggingface/datasets/issues/5116/events
https://github.com/huggingface/datasets/pull/5116
1,409,549,471
PR_kwDODunzps5A09sk
5,116
Use yaml for issue templates + revamp
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-10-14T15:53:13Z
2022-10-19T13:05:49Z
2022-10-19T13:03:22Z
COLLABORATOR
null
null
null
Use YAML instead of Markdown (more expressive) for the issue templates. In addition, update their structure/fields to be more aligned with Transformers. PS: this also removes the "add_dataset" PR template, as we no longer accept such PRs.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5116/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5116/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5116.diff", "html_url": "https://github.com/huggingface/datasets/pull/5116", "merged_at": "2022-10-19T13:03:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/5116.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5116" }
https://api.github.com/repos/huggingface/datasets/issues/5410
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5410/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5410/comments
https://api.github.com/repos/huggingface/datasets/issues/5410/events
https://github.com/huggingface/datasets/pull/5410
1,521,168,032
PR_kwDODunzps5GvnJH
5,410
Map-style Dataset to IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
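The PR above adds a conversion from a map-style `Dataset` to an `IterableDataset`. A minimal sketch of what such a conversion looks like from user code, assuming the `Dataset.to_iterable_dataset()` method this PR appears to introduce (the method name is inferred from the PR title and discussion, not verified against a specific release):

```python
from datasets import Dataset

# Small in-memory map-style dataset for illustration.
ds = Dataset.from_dict(
    {"id": list(range(10)), "text": [f"example {i}" for i in range(10)]}
)

# Convert to an IterableDataset: examples are then consumed lazily,
# one at a time, instead of via random access by index.
iterable_ds = ds.to_iterable_dataset()

for example in iterable_ds:
    print(example)  # e.g. {'id': 0, 'text': 'example 0'}
```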
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009812 / 0.011353 (-0.001540) | 0.005290 / 0.011008 (-0.005719) | 0.099728 / 0.038508 (0.061220) | 0.036712 / 0.023109 (0.013602) | 0.305924 / 0.275898 (0.030026) | 0.349844 / 0.323480 (0.026365) | 0.008353 / 0.007986 (0.000368) | 0.004464 / 0.004328 (0.000135) | 0.075329 / 0.004250 (0.071079) | 0.046146 / 0.037052 (0.009094) | 0.304197 / 0.258489 (0.045708) | 0.354245 / 0.293841 (0.060404) | 0.039270 / 0.128546 (-0.089276) | 0.012496 / 0.075646 (-0.063151) | 0.334390 / 0.419271 (-0.084882) | 0.049428 / 0.043533 (0.005896) | 0.297318 / 0.255139 (0.042179) | 0.315646 / 0.283200 (0.032447) | 0.106746 / 0.141683 (-0.034937) | 1.443562 / 1.452155 (-0.008593) | 1.546022 / 1.492716 (0.053305) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303419 / 0.018006 (0.285413) | 0.536971 / 0.000490 (0.536481) | 0.001335 / 0.000200 (0.001135) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030484 / 0.037411 (-0.006927) | 0.110043 / 0.014526 (0.095518) | 0.125265 / 0.176557 (-0.051291) | 0.171410 / 0.737135 (-0.565725) | 0.128978 / 0.296338 (-0.167361) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398354 / 0.215209 (0.183145) | 3.984180 / 2.077655 (1.906526) | 1.781134 / 1.504120 (0.277014) | 1.589656 / 1.541195 (0.048462) | 1.704192 / 1.468490 
(0.235702) | 0.682271 / 4.584777 (-3.902506) | 3.731504 / 3.745712 (-0.014208) | 2.243520 / 5.269862 (-3.026342) | 1.511334 / 4.565676 (-3.054343) | 0.084243 / 0.424275 (-0.340032) | 0.012261 / 0.007607 (0.004654) | 0.507499 / 0.226044 (0.281454) | 5.066037 / 2.268929 (2.797109) | 2.246107 / 55.444624 (-53.198517) | 1.921032 / 6.876477 (-4.955444) | 2.144111 / 2.142072 (0.002039) | 0.845233 / 4.805227 (-3.959995) | 0.165392 / 6.500664 (-6.335272) | 0.064201 / 0.075469 (-0.011268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.217649 / 1.841788 (-0.624138) | 15.890487 / 8.074308 (7.816179) | 14.772039 / 10.191392 (4.580647) | 0.192901 / 0.680424 (-0.487523) | 0.029119 / 0.534201 (-0.505082) | 0.442904 / 0.579283 (-0.136380) | 0.451035 / 0.434364 (0.016671) | 0.520788 / 0.540337 (-0.019550) | 0.623588 / 1.386936 (-0.763348) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007452 / 0.011353 (-0.003901) | 0.005426 / 0.011008 (-0.005582) | 0.096488 / 0.038508 (0.057980) | 0.033575 / 0.023109 (0.010465) | 0.375688 / 0.275898 (0.099790) | 0.412393 / 0.323480 (0.088913) | 0.006050 / 0.007986 (-0.001936) | 0.004424 / 0.004328 (0.000095) | 0.073102 / 0.004250 (0.068852) | 0.052672 / 0.037052 (0.015620) | 0.379352 / 0.258489 (0.120862) | 0.436065 / 0.293841 (0.142224) | 0.036594 / 0.128546 (-0.091952) | 0.012380 / 0.075646 (-0.063266) | 0.332899 / 0.419271 (-0.086373) | 0.048859 / 0.043533 (0.005326) | 0.373215 / 0.255139 (0.118076) | 0.386990 / 0.283200 (0.103791) | 0.105166 / 0.141683 (-0.036517) | 1.490762 / 1.452155 (0.038607) | 1.611310 / 1.492716 (0.118593) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.333142 / 0.018006 (0.315136) | 0.537137 / 0.000490 (0.536647) | 0.000452 / 0.000200 (0.000252) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030368 / 0.037411 (-0.007043) | 0.109608 / 0.014526 (0.095083) | 0.124220 / 0.176557 (-0.052336) | 0.162834 / 0.737135 (-0.574301) | 0.128037 / 0.296338 (-0.168302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440991 / 0.215209 (0.225782) | 4.400825 / 2.077655 (2.323170) | 2.158768 / 1.504120 (0.654648) | 1.968158 / 1.541195 (0.426963) | 2.085115 / 1.468490 (0.616625) | 0.710757 / 4.584777 (-3.874020) | 3.835441 / 3.745712 (0.089729) | 2.204118 / 5.269862 (-3.065744) | 1.378909 / 4.565676 (-3.186767) | 0.089149 / 0.424275 (-0.335126) | 0.013066 / 0.007607 (0.005459) | 0.539165 / 0.226044 (0.313121) | 5.414176 / 2.268929 (3.145248) | 2.677020 / 55.444624 (-52.767604) | 2.328334 / 6.876477 (-4.548143) | 2.518933 / 2.142072 (0.376860) | 0.840902 / 4.805227 (-3.964325) | 0.170365 / 6.500664 (-6.330299) | 0.063909 / 0.075469 (-0.011561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237205 / 1.841788 (-0.604583) | 15.678776 / 8.074308 (7.604468) | 14.118576 / 10.191392 (3.927184) | 0.167236 / 0.680424 (-0.513188) | 0.018177 / 0.534201 (-0.516024) | 0.426680 / 0.579283 (-0.152603) | 0.425126 / 0.434364 (-0.009238) | 0.501755 / 0.540337 (-0.038582) | 0.592754 / 1.386936 (-0.794182) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008708 / 0.011353 (-0.002645) | 0.004462 / 0.011008 (-0.006546) | 0.100159 / 0.038508 (0.061651) | 0.029543 / 0.023109 (0.006434) | 0.304056 / 0.275898 (0.028158) | 0.367098 / 0.323480 (0.043618) | 0.007049 / 0.007986 (-0.000937) | 0.003294 / 0.004328 (-0.001034) | 0.076954 / 0.004250 (0.072703) | 0.036850 / 0.037052 (-0.000202) | 0.307556 / 0.258489 (0.049067) | 0.348327 / 0.293841 (0.054486) | 0.033520 / 0.128546 (-0.095026) | 0.011312 / 0.075646 (-0.064334) | 0.317588 / 0.419271 (-0.101684) | 0.040196 / 0.043533 (-0.003337) | 0.298330 / 0.255139 (0.043191) | 0.333821 / 0.283200 (0.050622) | 0.086584 / 0.141683 (-0.055099) | 1.480205 / 1.452155 (0.028050) | 1.520975 / 1.492716 (0.028259) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186641 / 0.018006 (0.168635) | 0.414420 / 0.000490 (0.413930) | 0.003021 / 0.000200 (0.002821) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022953 / 0.037411 (-0.014458) | 0.097338 / 0.014526 (0.082812) | 0.104985 / 0.176557 (-0.071572) | 0.139208 / 0.737135 (-0.597927) | 0.108031 / 0.296338 (-0.188307) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417969 / 0.215209 (0.202759) | 4.173189 / 2.077655 (2.095534) | 1.862813 / 1.504120 (0.358693) | 1.653226 / 1.541195 (0.112031) | 1.725917 / 1.468490 (0.257426) | 0.701038 / 4.584777 (-3.883739) | 3.350500 / 3.745712 (-0.395213) | 1.913156 / 5.269862 (-3.356705) | 1.267597 / 4.565676 (-3.298079) | 0.082197 / 0.424275 (-0.342078) | 0.012499 / 0.007607 (0.004892) | 0.520173 / 0.226044 (0.294128) | 5.219981 / 2.268929 (2.951053) | 2.306029 / 55.444624 (-53.138595) | 1.948169 / 6.876477 (-4.928307) | 2.013160 / 2.142072 (-0.128912) | 0.813325 / 4.805227 (-3.991902) | 0.149729 / 6.500664 (-6.350935) | 0.065492 / 0.075469 (-0.009977) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.194163 / 1.841788 (-0.647625) | 13.739562 / 8.074308 (5.665254) | 13.881988 / 10.191392 (3.690596) | 0.138180 / 0.680424 (-0.542244) | 0.029031 / 0.534201 (-0.505170) | 0.387858 / 0.579283 (-0.191425) | 0.395171 / 0.434364 (-0.039193) | 0.446349 / 0.540337 (-0.093988) | 0.527073 / 1.386936 (-0.859863) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006504 / 0.011353 (-0.004849) | 0.004564 / 0.011008 (-0.006444) | 0.099108 / 0.038508 (0.060599) | 0.027420 / 0.023109 (0.004311) | 0.340712 / 0.275898 (0.064814) | 0.391613 / 0.323480 (0.068133) | 0.004977 / 0.007986 (-0.003009) | 0.003375 / 0.004328 (-0.000953) | 0.076403 / 0.004250 (0.072152) | 0.036650 / 0.037052 (-0.000402) | 0.341948 / 0.258489 (0.083459) | 0.392065 / 0.293841 (0.098224) | 0.031802 / 0.128546 (-0.096745) | 0.011659 / 0.075646 (-0.063987) | 0.320099 / 0.419271 (-0.099173) | 0.041615 / 0.043533 (-0.001918) | 0.342125 / 0.255139 (0.086986) | 0.372833 / 0.283200 (0.089633) | 0.089032 / 0.141683 (-0.052650) | 1.486691 / 1.452155 (0.034536) | 1.567326 / 1.492716 (0.074610) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193123 / 0.018006 (0.175117) | 0.404062 / 0.000490 (0.403573) | 0.003460 / 0.000200 (0.003260) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024565 / 0.037411 (-0.012846) | 0.098958 / 0.014526 (0.084432) | 0.108701 / 0.176557 (-0.067855) | 0.142567 / 0.737135 (-0.594569) | 0.111048 / 0.296338 (-0.185290) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474549 / 0.215209 (0.259340) | 4.753776 / 2.077655 (2.676121) | 2.435528 / 1.504120 (0.931409) | 2.234491 / 1.541195 (0.693297) | 2.269474 / 1.468490 (0.800984) | 0.695636 / 4.584777 
(-3.889141) | 3.367816 / 3.745712 (-0.377896) | 1.854828 / 5.269862 (-3.415034) | 1.159729 / 4.565676 (-3.405948) | 0.082267 / 0.424275 (-0.342008) | 0.012483 / 0.007607 (0.004876) | 0.578490 / 0.226044 (0.352446) | 5.814490 / 2.268929 (3.545561) | 2.893310 / 55.444624 (-52.551314) | 2.540555 / 6.876477 (-4.335922) | 2.573705 / 2.142072 (0.431633) | 0.800545 / 4.805227 (-4.004682) | 0.151306 / 6.500664 (-6.349358) | 0.067925 / 0.075469 (-0.007544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294645 / 1.841788 (-0.547142) | 13.641842 / 8.074308 (5.567534) | 14.015200 / 10.191392 (3.823808) | 0.128829 / 0.680424 (-0.551595) | 0.016870 / 0.534201 (-0.517331) | 0.389137 / 0.579283 (-0.190146) | 0.388384 / 0.434364 (-0.045980) | 0.447711 / 0.540337 (-0.092627) | 0.540637 / 1.386936 (-0.846299) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45ad185b9040a68285080b6099ed3af58442ccb2 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012282 / 0.011353 (0.000929) | 0.006328 / 0.011008 (-0.004680) | 0.129666 / 0.038508 (0.091158) | 0.039403 / 0.023109 (0.016294) | 0.375464 / 0.275898 (0.099566) | 0.463167 / 0.323480 (0.139687) | 0.010329 / 0.007986 (0.002344) | 0.005111 / 0.004328 (0.000782) | 0.108727 / 0.004250 (0.104476) | 0.047156 / 0.037052 (0.010103) | 0.381869 / 0.258489 (0.123380) | 0.441936 / 0.293841 (0.148095) | 0.054750 / 0.128546 (-0.073796) | 0.019809 / 0.075646 (-0.055837) | 0.436389 / 0.419271 (0.017118) | 0.066585 / 0.043533 (0.023052) | 0.402108 / 0.255139 (0.146969) | 0.424571 / 0.283200 (0.141371) | 0.118326 / 0.141683 (-0.023357) | 1.870175 / 1.452155 (0.418020) | 1.878720 / 1.492716 (0.386004) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012863 / 0.018006 (-0.005144) | 0.528670 / 0.000490 (0.528181) | 0.006057 / 0.000200 (0.005857) | 0.000124 / 0.000054 (0.000069) 
|\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030091 / 0.037411 (-0.007320) | 0.136143 / 0.014526 (0.121618) | 0.148931 / 0.176557 (-0.027626) | 0.179578 / 0.737135 (-0.557558) | 0.144528 / 0.296338 (-0.151810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.594080 / 0.215209 (0.378871) | 6.029101 / 2.077655 (3.951446) | 2.443084 / 1.504120 (0.938964) | 2.123949 / 1.541195 (0.582754) | 2.183021 / 1.468490 (0.714531) | 1.235453 / 4.584777 (-3.349324) | 5.585121 / 3.745712 (1.839408) | 3.208510 / 5.269862 (-2.061351) | 2.090334 / 4.565676 (-2.475342) | 0.150353 / 0.424275 (-0.273922) | 0.016787 / 0.007607 (0.009180) | 0.797561 / 0.226044 (0.571516) | 7.756291 / 2.268929 (5.487363) | 3.283638 / 55.444624 (-52.160986) | 2.527441 / 6.876477 (-4.349036) | 2.590765 / 2.142072 (0.448692) | 1.446818 / 4.805227 (-3.358409) | 0.250563 / 6.500664 (-6.250101) | 0.077919 / 0.075469 (0.002450) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.612022 / 1.841788 (-0.229765) | 18.363316 / 8.074308 (10.289008) | 22.578570 / 10.191392 (12.387178) | 0.232801 / 0.680424 (-0.447623) | 0.048232 / 0.534201 (-0.485969) | 0.549518 / 0.579283 (-0.029766) | 0.624663 / 0.434364 (0.190299) | 0.674745 / 0.540337 (0.134408) | 0.803489 / 1.386936 (-0.583447) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009872 / 0.011353 (-0.001481) | 0.006593 / 0.011008 (-0.004415) | 0.139248 / 0.038508 (0.100740) | 0.035708 / 0.023109 (0.012598) | 0.551335 / 0.275898 (0.275437) | 0.544995 / 0.323480 (0.221515) | 0.007085 / 0.007986 (-0.000900) | 0.004742 / 0.004328 (0.000413) | 0.095823 / 0.004250 (0.091572) | 0.051674 / 0.037052 (0.014621) | 0.463405 / 0.258489 (0.204916) | 0.640392 / 0.293841 (0.346551) | 0.055242 / 0.128546 (-0.073304) | 0.022602 / 0.075646 (-0.053044) | 0.419171 / 0.419271 (-0.000100) | 0.062986 / 0.043533 (0.019453) | 0.503683 / 0.255139 (0.248544) | 0.568719 / 0.283200 (0.285519) | 0.113906 / 0.141683 (-0.027777) | 1.825248 / 1.452155 (0.373094) | 1.985667 / 1.492716 (0.492951) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237478 / 0.018006 (0.219472) | 0.528861 / 0.000490 (0.528371) | 0.008507 / 0.000200 (0.008307) | 0.000158 / 0.000054 (0.000103) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033536 / 0.037411 (-0.003875) | 0.144202 / 0.014526 (0.129677) | 0.139472 / 0.176557 (-0.037084) | 0.184540 / 0.737135 (-0.552596) | 0.147818 / 0.296338 (-0.148520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671654 / 0.215209 (0.456445) | 6.616368 / 2.077655 (4.538713) | 2.805634 / 1.504120 (1.301514) | 2.482890 / 1.541195 (0.941695) | 2.547686 / 1.468490 (1.079195) | 1.289169 / 4.584777 (-3.295608) | 5.551436 / 3.745712 (1.805724) | 5.228500 / 5.269862 (-0.041362) | 2.456706 / 4.565676 (-2.108970) | 0.148556 / 0.424275 (-0.275720) | 0.015290 / 0.007607 (0.007683) | 0.837090 / 0.226044 (0.611045) | 8.373561 / 2.268929 (6.104632) | 3.663910 / 55.444624 (-51.780714) | 2.927117 / 6.876477 (-3.949360) | 2.976785 / 2.142072 (0.834712) | 1.501618 / 4.805227 (-3.303609) | 0.263321 / 6.500664 (-6.237343) | 0.082644 / 0.075469 (0.007175) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.707419 / 1.841788 (-0.134368) | 18.371117 / 8.074308 (10.296809) | 22.015154 / 10.191392 (11.823762) | 0.232066 / 0.680424 (-0.448357) | 0.027149 / 0.534201 (-0.507052) | 0.544450 / 0.579283 (-0.034833) | 0.605134 / 0.434364 (0.170770) | 0.656063 / 0.540337 (0.115725) | 0.788121 / 1.386936 (-0.598815) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f1e0ec31e07e4bc0469f4acfed601d9c71e9a459 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008952 / 0.011353 (-0.002401) | 0.005592 / 0.011008 (-0.005416) | 0.101138 / 0.038508 (0.062630) | 0.035573 / 0.023109 (0.012464) | 0.295959 / 0.275898 (0.020060) | 0.365347 / 0.323480 (0.041867) | 0.008136 / 0.007986 (0.000150) | 0.004479 / 0.004328 (0.000150) | 0.078806 / 0.004250 (0.074556) | 0.045180 / 0.037052 (0.008127) | 0.321687 / 0.258489 (0.063198) | 0.345874 / 0.293841 (0.052033) | 0.038720 / 0.128546 (-0.089826) | 0.012534 / 0.075646 (-0.063112) | 0.335571 / 0.419271 (-0.083700) | 0.049048 / 0.043533 (0.005515) | 0.294756 / 0.255139 (0.039617) | 0.327496 / 0.283200 (0.044296) | 0.109181 / 0.141683 (-0.032502) | 1.417068 / 1.452155 (-0.035087) | 1.455473 / 1.492716 (-0.037244) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267774 / 0.018006 (0.249768) | 0.538546 / 0.000490 (0.538056) | 0.001755 / 0.000200 (0.001555) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026839 / 0.037411 (-0.010572) | 0.105862 / 0.014526 (0.091336) | 0.118278 / 0.176557 (-0.058279) | 0.157926 / 0.737135 (-0.579209) | 0.124700 / 0.296338 (-0.171638) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399060 / 0.215209 
(0.183851) | 3.991409 / 2.077655 (1.913754) | 1.763569 / 1.504120 (0.259449) | 1.579602 / 1.541195 (0.038407) | 1.652928 / 1.468490 (0.184438) | 0.692962 / 4.584777 (-3.891815) | 3.784635 / 3.745712 (0.038922) | 3.249341 / 5.269862 (-2.020521) | 1.815711 / 4.565676 (-2.749966) | 0.084384 / 0.424275 (-0.339891) | 0.012546 / 0.007607 (0.004939) | 0.521397 / 0.226044 (0.295352) | 5.075824 / 2.268929 (2.806895) | 2.258353 / 55.444624 (-53.186272) | 1.925220 / 6.876477 (-4.951256) | 2.002821 / 2.142072 (-0.139252) | 0.830507 / 4.805227 (-3.974720) | 0.165845 / 6.500664 (-6.334819) | 0.063905 / 0.075469 (-0.011565) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198726 / 1.841788 (-0.643061) | 14.804448 / 8.074308 (6.730139) | 12.855167 / 10.191392 (2.663775) | 0.167932 / 0.680424 (-0.512492) | 0.028643 / 0.534201 (-0.505558) | 0.441224 / 0.579283 (-0.138059) | 0.434924 / 0.434364 (0.000560) | 0.516188 / 0.540337 (-0.024150) | 0.605017 / 1.386936 (-0.781919) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007031 / 0.011353 (-0.004322) | 0.005157 / 0.011008 (-0.005851) | 0.086943 / 0.038508 (0.048434) | 0.031377 / 0.023109 (0.008268) | 0.334810 / 0.275898 (0.058912) | 0.368590 / 0.323480 (0.045110) | 0.005973 / 0.007986 (-0.002013) | 0.004173 / 0.004328 (-0.000155) | 0.067033 / 0.004250 (0.062783) | 0.054070 / 0.037052 (0.017018) | 0.332232 / 0.258489 (0.073743) | 0.384982 / 0.293841 (0.091141) | 0.034023 / 0.128546 (-0.094524) | 0.011301 / 0.075646 (-0.064345) | 0.295644 / 0.419271 (-0.123628) | 0.045589 / 0.043533 (0.002056) | 0.330739 / 0.255139 (0.075600) | 0.352841 / 0.283200 (0.069642) | 0.104829 / 0.141683 (-0.036854) | 1.329360 / 1.452155 (-0.122794) | 1.437956 / 1.492716 (-0.054760) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299187 / 0.018006 (0.281181) | 0.563407 / 0.000490 (0.562917) | 0.004179 / 0.000200 (0.003979) | 0.000114 / 0.000054 
(0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027405 / 0.037411 (-0.010006) | 0.097498 / 0.014526 (0.082972) | 0.114265 / 0.176557 (-0.062292) | 0.146823 / 0.737135 (-0.590313) | 0.117948 / 0.296338 (-0.178391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.378756 / 0.215209 (0.163547) | 3.774804 / 2.077655 (1.697150) | 1.804149 / 1.504120 (0.300029) | 1.626312 / 1.541195 (0.085117) | 1.731111 / 1.468490 (0.262620) | 0.633493 / 4.584777 (-3.951284) | 3.488220 / 3.745712 (-0.257492) | 3.064710 / 5.269862 (-2.205151) | 1.690647 / 4.565676 (-2.875029) | 0.076093 / 0.424275 (-0.348182) | 0.010820 / 0.007607 (0.003213) | 0.465091 / 0.226044 (0.239046) | 4.676842 / 2.268929 (2.407913) | 2.297381 / 55.444624 (-53.147244) | 1.960355 / 6.876477 (-4.916122) | 1.983742 / 2.142072 (-0.158330) | 0.739525 / 4.805227 (-4.065702) | 0.152663 / 6.500664 (-6.348001) | 0.057316 / 0.075469 (-0.018153) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.104721 / 1.841788 (-0.737067) | 14.577171 / 8.074308 (6.502863) | 13.680402 / 10.191392 (3.489010) | 0.182234 / 0.680424 (-0.498190) | 0.018853 / 0.534201 (-0.515348) | 0.426194 / 0.579283 (-0.153089) | 0.429202 / 0.434364 (-0.005162) | 0.543125 / 0.540337 (0.002788) | 0.645887 / 1.386936 (-0.741049) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f830952573bdc59f8732b8f1a13f70d9187e0a65 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010055 / 0.011353 (-0.001298) | 0.005576 / 0.011008 (-0.005432) | 0.100059 / 0.038508 (0.061551) | 0.038535 / 0.023109 (0.015425) | 0.297538 / 0.275898 (0.021640) | 0.368117 / 0.323480 (0.044637) | 0.008540 / 0.007986 (0.000555) | 0.004469 / 0.004328 (0.000141) | 0.075801 / 0.004250 (0.071551) | 0.046604 / 0.037052 (0.009552) | 0.307242 / 0.258489 (0.048753) | 0.343949 / 0.293841 (0.050108) | 0.039353 / 0.128546 (-0.089194) | 0.012446 / 0.075646 (-0.063200) | 0.334628 / 0.419271 (-0.084643) | 0.051628 / 0.043533 (0.008095) | 0.298726 / 0.255139 (0.043587) | 0.316010 / 0.283200 (0.032810) | 0.120564 / 0.141683 (-0.021119) | 1.459396 / 1.452155 (0.007241) | 1.493682 / 1.492716 (0.000965) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011702 / 0.018006 (-0.006304) | 0.570261 / 0.000490 (0.569771) | 0.003760 / 0.000200 (0.003560) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028806 / 0.037411 (-0.008605) | 0.112150 / 0.014526 (0.097625) | 0.123140 / 0.176557 (-0.053417) | 0.173055 / 0.737135 (-0.564080) | 0.130060 / 0.296338 (-0.166279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398216 / 0.215209 (0.183007) | 3.978677 / 2.077655 (1.901022) | 1.754229 / 1.504120 (0.250109) | 1.561892 / 1.541195 (0.020697) | 1.679138 / 1.468490 (0.210648) | 0.690254 / 4.584777 (-3.894523) | 3.817698 / 3.745712 (0.071986) | 2.177854 / 5.269862 (-3.092008) | 1.361860 / 4.565676 (-3.203816) | 0.084108 / 0.424275 (-0.340167) | 0.012640 / 0.007607 (0.005033) | 0.504385 / 0.226044 (0.278341) | 5.034103 / 2.268929 (2.765174) | 2.254032 / 55.444624 (-53.190593) | 1.910439 / 6.876477 (-4.966038) | 2.003515 / 2.142072 (-0.138558) | 0.839747 / 4.805227 (-3.965480) | 0.165654 / 6.500664 (-6.335010) | 0.063483 / 0.075469 (-0.011986) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187521 / 1.841788 (-0.654267) | 15.381121 / 8.074308 (7.306812) | 14.579418 / 10.191392 (4.388026) | 0.199221 / 0.680424 (-0.481202) | 0.029335 / 0.534201 (-0.504866) | 0.443159 / 0.579283 (-0.136124) | 0.447772 / 0.434364 (0.013408) | 
0.545071 / 0.540337 (0.004733) | 0.650494 / 1.386936 (-0.736442) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007675 / 0.011353 (-0.003677) | 0.005364 / 0.011008 (-0.005644) | 0.097921 / 0.038508 (0.059413) | 0.033645 / 0.023109 (0.010536) | 0.404818 / 0.275898 (0.128920) | 0.429983 / 0.323480 (0.106503) | 0.006106 / 0.007986 (-0.001879) | 0.005281 / 0.004328 (0.000953) | 0.073762 / 0.004250 (0.069512) | 0.053065 / 0.037052 (0.016012) | 0.400657 / 0.258489 (0.142168) | 0.447743 / 0.293841 (0.153902) | 0.036782 / 0.128546 (-0.091765) | 0.012593 / 0.075646 (-0.063054) | 0.332825 / 0.419271 (-0.086446) | 0.049424 / 0.043533 (0.005891) | 0.400397 / 0.255139 (0.145258) | 0.414794 / 0.283200 (0.131594) | 0.106555 / 0.141683 (-0.035128) | 1.466917 / 1.452155 (0.014762) | 1.571351 / 1.492716 (0.078635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254337 / 0.018006 (0.236331) | 0.568360 / 0.000490 (0.567870) | 0.000445 / 0.000200 (0.000245) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031044 / 0.037411 (-0.006367) | 0.112282 / 0.014526 (0.097756) | 0.127205 / 0.176557 (-0.049352) | 0.166551 / 0.737135 (-0.570584) | 0.130520 / 0.296338 (-0.165818) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442906 / 0.215209 (0.227697) | 4.430218 / 2.077655 (2.352563) | 2.287251 / 1.504120 (0.783132) | 2.112345 / 1.541195 
(0.571150) | 2.240952 / 1.468490 (0.772462) | 0.713800 / 4.584777 (-3.870977) | 3.884161 / 3.745712 (0.138449) | 2.166901 / 5.269862 (-3.102960) | 1.374490 / 4.565676 (-3.191187) | 0.087548 / 0.424275 (-0.336727) | 0.012369 / 0.007607 (0.004761) | 0.540783 / 0.226044 (0.314739) | 5.396187 / 2.268929 (3.127258) | 2.779636 / 55.444624 (-52.664988) | 2.434220 / 6.876477 (-4.442257) | 2.508180 / 2.142072 (0.366107) | 0.852470 / 4.805227 (-3.952757) | 0.171266 / 6.500664 (-6.329398) | 0.065463 / 0.075469 (-0.010006) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241720 / 1.841788 (-0.600067) | 15.332568 / 8.074308 (7.258260) | 13.688723 / 10.191392 (3.497331) | 0.145150 / 0.680424 (-0.535273) | 0.017694 / 0.534201 (-0.516507) | 0.426078 / 0.579283 (-0.153205) | 0.441189 / 0.434364 (0.006825) | 0.540284 / 0.540337 (-0.000054) | 0.657548 / 1.386936 (-0.729388) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c47ecf71362f6b6290b6471b30e77184a5e1df31 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008604 / 0.011353 (-0.002749) | 0.004566 / 0.011008 (-0.006442) | 0.099607 / 0.038508 (0.061099) | 0.029628 / 0.023109 (0.006519) | 0.300481 / 0.275898 (0.024583) | 0.342596 / 0.323480 (0.019116) | 0.007003 / 0.007986 (-0.000982) | 0.003408 / 0.004328 (-0.000920) | 0.079076 / 0.004250 (0.074826) | 0.034104 / 0.037052 (-0.002948) | 0.303856 / 0.258489 (0.045367) | 0.348729 / 0.293841 (0.054888) | 0.033752 / 0.128546 (-0.094794) | 0.011497 / 0.075646 (-0.064149) | 0.321568 / 0.419271 (-0.097704) | 0.041472 / 0.043533 (-0.002061) | 0.303396 / 0.255139 (0.048257) | 0.331121 / 0.283200 (0.047921) | 0.086203 / 0.141683 (-0.055480) | 1.476995 / 1.452155 (0.024840) | 1.539428 / 1.492716 (0.046712) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215810 / 0.018006 (0.197803) | 0.414292 / 0.000490 
(0.413802) | 0.000388 / 0.000200 (0.000188) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023441 / 0.037411 (-0.013970) | 0.098463 / 0.014526 (0.083938) | 0.105435 / 0.176557 (-0.071121) | 0.139736 / 0.737135 (-0.597399) | 0.109467 / 0.296338 (-0.186872) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418244 / 0.215209 (0.203035) | 4.160693 / 2.077655 (2.083039) | 1.878895 / 1.504120 (0.374775) | 1.679338 / 1.541195 (0.138143) | 1.730384 / 1.468490 (0.261894) | 0.688603 / 4.584777 (-3.896174) | 3.393542 / 3.745712 (-0.352170) | 1.901337 / 5.269862 (-3.368525) | 1.447269 / 4.565676 (-3.118408) | 0.083003 / 0.424275 (-0.341272) | 0.012574 / 0.007607 (0.004967) | 0.526363 / 0.226044 (0.300318) | 5.275159 / 2.268929 (3.006230) | 2.323642 / 55.444624 (-53.120982) | 1.982929 / 6.876477 (-4.893548) | 2.014081 / 2.142072 (-0.127991) | 0.809466 / 4.805227 (-3.995761) | 0.149038 / 6.500664 (-6.351626) | 0.064394 / 0.075469 (-0.011075) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.207439 / 1.841788 (-0.634349) | 13.691048 / 8.074308 (5.616740) | 13.880965 / 10.191392 (3.689573) | 0.148553 / 0.680424 (-0.531871) | 0.028397 / 0.534201 (-0.505804) | 0.391818 / 0.579283 (-0.187465) | 0.407181 / 0.434364 (-0.027183) | 0.481163 / 0.540337 (-0.059175) | 0.570689 / 1.386936 (-0.816247) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006361 / 0.011353 (-0.004992) | 0.004520 / 0.011008 (-0.006488) | 0.097679 / 0.038508 (0.059171) | 0.027223 / 0.023109 (0.004113) | 0.407966 / 0.275898 (0.132068) | 0.439868 / 0.323480 (0.116388) | 0.004625 / 0.007986 (-0.003360) | 0.004039 / 0.004328 (-0.000289) | 0.074548 / 0.004250 (0.070298) | 0.034957 / 0.037052 (-0.002095) | 0.412762 / 0.258489 (0.154273) | 0.449716 / 0.293841 (0.155875) | 0.031272 / 0.128546 (-0.097274) | 0.011598 / 0.075646 (-0.064049) | 0.320922 / 0.419271 (-0.098349) | 0.041250 / 0.043533 (-0.002283) | 0.411439 / 0.255139 (0.156300) | 0.429722 / 0.283200 (0.146523) | 0.087161 / 0.141683 (-0.054522) | 1.512573 / 1.452155 (0.060418) | 1.569385 / 1.492716 (0.076668) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222612 / 0.018006 (0.204606) | 0.409086 / 0.000490 (0.408596) | 0.004246 / 0.000200 (0.004046) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024324 / 0.037411 (-0.013087) | 0.099055 / 0.014526 (0.084530) | 0.106809 / 0.176557 (-0.069748) | 0.141275 / 0.737135 (-0.595860) | 0.109426 / 0.296338 (-0.186913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469736 / 0.215209 (0.254527) | 4.686900 / 2.077655 (2.609246) | 2.413392 / 1.504120 (0.909272) | 2.217366 / 1.541195 (0.676171) | 2.266957 / 1.468490 (0.798467) | 0.698647 / 4.584777 (-3.886129) | 3.389317 / 3.745712 (-0.356395) | 1.862315 / 5.269862 (-3.407546) | 1.160931 / 4.565676 (-3.404746) | 0.082829 / 0.424275 (-0.341446) | 0.012627 / 0.007607 (0.005020) | 0.568027 / 0.226044 (0.341983) | 5.683220 / 2.268929 (3.414291) | 2.865701 / 55.444624 (-52.578924) | 2.522401 / 6.876477 (-4.354076) | 2.542395 / 2.142072 (0.400323) | 0.801224 / 4.805227 (-4.004003) | 0.149946 / 6.500664 (-6.350718) | 0.065447 / 0.075469 (-0.010023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283756 / 1.841788 (-0.558032) | 13.903662 / 8.074308 (5.829354) | 13.238389 / 10.191392 (3.046997) | 0.142304 / 0.680424 (-0.538120) | 0.016922 / 0.534201 (-0.517279) | 0.377797 / 0.579283 (-0.201487) | 0.382460 / 0.434364 (-0.051904) | 0.464645 / 0.540337 (-0.075692) | 0.556270 / 1.386936 (-0.830666) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#675cf2910c5e6f083ed6664a7bffba9a58f78309 \"CML watermark\")\n", "> I think this would be more of a Conceptual Guide doc since this is more explanatory and compares the differences between a Dataset and an IterableDataset\r\n\r\nsounds good to me!\r\n\r\n> There are definitely places in the docs where we can add a note and link to this doc to build up the user's understanding of this topic. For example, in the Know your dataset [tutorial](https://huggingface.co/docs/datasets/access), we only introduce the regular Dataset object and not the IterableDataset. We can add a section there for IterableDataset and then link to this doc that explains the difference between the two 🙂\r\n\r\ngood idea, thanks :)", "I'll open a PR to add a section on `IterableDataset`s in the tutorial, and once you're done editing this doc I can give it a final polish! 😄", "I moved the doc page to conceptual guides and took your suggestions into account :)\r\n\r\nI think this is ready for final review now",
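To make the `Dataset` vs. `IterableDataset` distinction discussed above concrete, here is a minimal sketch; the dataset name is only an illustrative placeholder, not one taken from this thread:

```python
from datasets import load_dataset

# Map-style Dataset: the data is downloaded and cached on disk as Arrow files,
# and rows support random access by index.
ds = load_dataset("rotten_tomatoes", split="train")
print(ds[0])

# IterableDataset: examples are streamed lazily as you iterate, with no full
# download up front, but access is sequential only.
ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
print(next(iter(ids)))
```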
"I took your comments into account :)\r\n\r\n> Regarding the docs, I think it would be better to add this info as notes/tips/sections to the existing docs (Process/Stream; e.g. a tip under Dataset.shuffle that explains how to make this operation more performant by using to_iterable + shuffle, etc.) rather than introducing a new doc page.\r\n\r\nI added a paragraph in the Dataset.shuffle docstring, and a note in the Process doc page",
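A rough sketch of that tip, assuming a `to_iterable_dataset` conversion method on `Dataset`; the shard count and buffer size below are arbitrary illustration values:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# Dataset.shuffle materializes an indices mapping, so later reads become
# non-contiguous and therefore slower.
shuffled = ds.shuffle(seed=42)

# Often faster: convert to an IterableDataset and use its approximate,
# buffer-based shuffling, which also shuffles the order of the shards.
fast = ds.to_iterable_dataset(num_shards=128).shuffle(seed=42, buffer_size=1000)
```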
"Took your last comments into account!\r\n\r\n> so maybe a better title for it would be \"Optimize processing\" (or \"Working with datasets at scale\", as I mentioned earlier on Slack)\r\n\r\nI think the content would be slightly different, e.g. focus more on multiprocessing/sharding or on which data formats to use. This can be a complementary page IMO.\r\n\r\n> PS: I think it would be a good idea to add links to the Guide pages for better discoverability and to somewhat \"justify their presence in the docs\" (from the tutorial/how-to pages to the guides; some guides are not referenced at all)\r\n\r\nAdded a link in the how-to stream page. 
We may want to include it in the tutorial at one point at well - right now none of the tutorials mention streaming", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009167 / 0.011353 (-0.002186) | 0.005345 / 0.011008 (-0.005663) | 0.098302 / 0.038508 (0.059794) | 0.035649 / 0.023109 (0.012540) | 0.295597 / 0.275898 (0.019699) | 0.358843 / 0.323480 (0.035364) | 0.008011 / 0.007986 (0.000025) | 0.004229 / 0.004328 (-0.000100) | 0.075123 / 0.004250 (0.070872) | 0.046098 / 0.037052 (0.009046) | 0.310581 / 0.258489 (0.052092) | 0.343230 / 0.293841 (0.049389) | 0.038318 / 0.128546 (-0.090229) | 0.011954 / 0.075646 (-0.063693) | 0.331056 / 0.419271 (-0.088216) | 0.052875 / 0.043533 (0.009342) | 0.302758 / 0.255139 (0.047619) | 0.340596 / 0.283200 (0.057396) | 0.113676 / 0.141683 (-0.028007) | 1.448272 / 1.452155 (-0.003883) | 1.498008 / 1.492716 (0.005291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240524 / 0.018006 (0.222518) | 0.555823 / 0.000490 (0.555333) | 0.003143 / 0.000200 (0.002943) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027764 / 0.037411 (-0.009647) | 0.105006 / 0.014526 (0.090480) | 0.120550 / 0.176557 (-0.056007) | 0.167052 / 0.737135 (-0.570084) | 0.124521 / 0.296338 (-0.171818) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401758 / 0.215209 (0.186549) | 
3.989629 / 2.077655 (1.911974) | 1.767307 / 1.504120 (0.263187) | 1.579451 / 1.541195 (0.038257) | 1.637642 / 1.468490 (0.169152) | 0.702524 / 4.584777 (-3.882253) | 3.714326 / 3.745712 (-0.031386) | 2.131829 / 5.269862 (-3.138033) | 1.487410 / 4.565676 (-3.078267) | 0.084901 / 0.424275 (-0.339374) | 0.012292 / 0.007607 (0.004685) | 0.505211 / 0.226044 (0.279166) | 5.074479 / 2.268929 (2.805551) | 2.243068 / 55.444624 (-53.201556) | 1.880199 / 6.876477 (-4.996278) | 2.003757 / 2.142072 (-0.138315) | 0.870719 / 4.805227 (-3.934508) | 0.167626 / 6.500664 (-6.333039) | 0.062024 / 0.075469 (-0.013445) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.192969 / 1.841788 (-0.648819) | 14.830812 / 8.074308 (6.756504) | 14.331178 / 10.191392 (4.139786) | 0.199222 / 0.680424 (-0.481202) | 0.029292 / 0.534201 (-0.504909) | 0.440427 / 0.579283 (-0.138857) | 0.437893 / 0.434364 (0.003529) | 0.547155 / 0.540337 (0.006818) | 0.645255 / 1.386936 (-0.741681) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007465 / 0.011353 (-0.003888) | 0.005386 / 0.011008 (-0.005622) | 0.073609 / 0.038508 (0.035100) | 0.033550 / 0.023109 (0.010440) | 0.341730 / 0.275898 (0.065832) | 0.371518 / 0.323480 (0.048038) | 0.005986 / 0.007986 (-0.001999) | 0.004264 / 0.004328 (-0.000065) | 0.073749 / 0.004250 (0.069498) | 0.051452 / 0.037052 (0.014399) | 0.347385 / 0.258489 (0.088896) | 0.392284 / 0.293841 (0.098444) | 0.036981 / 0.128546 (-0.091566) | 0.012431 / 0.075646 (-0.063216) | 0.086421 / 0.419271 (-0.332850) | 0.053014 / 0.043533 (0.009481) | 0.336660 / 0.255139 (0.081521) | 0.359155 / 0.283200 (0.075956) | 0.107666 / 0.141683 (-0.034017) | 1.424324 / 1.452155 (-0.027830) | 1.543027 / 1.492716 (0.050310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260862 / 0.018006 (0.242855) | 0.552057 / 0.000490 (0.551567) | 0.000449 / 0.000200 (0.000249) | 0.000059 / 0.000054 (0.000005) 
|\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029184 / 0.037411 (-0.008227) | 0.108799 / 0.014526 (0.094274) | 0.125136 / 0.176557 (-0.051421) | 0.157436 / 0.737135 (-0.579699) | 0.126333 / 0.296338 (-0.170005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424054 / 0.215209 (0.208845) | 4.227847 / 2.077655 (2.150192) | 2.051102 / 1.504120 (0.546983) | 1.848651 / 1.541195 (0.307457) | 1.922728 / 1.468490 (0.454238) | 0.705903 / 4.584777 (-3.878874) | 3.800977 / 3.745712 (0.055265) | 2.099345 / 5.269862 (-3.170517) | 1.342919 / 4.565676 (-3.222757) | 0.086128 / 0.424275 (-0.338147) | 0.012539 / 0.007607 (0.004932) | 0.528767 / 0.226044 (0.302723) | 5.299989 / 2.268929 (3.031061) | 2.534280 / 55.444624 (-52.910345) | 2.229532 / 6.876477 (-4.646945) | 2.326704 / 2.142072 (0.184632) | 0.838533 / 4.805227 (-3.966694) | 0.168446 / 6.500664 (-6.332218) | 0.065158 / 0.075469 (-0.010311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250091 / 1.841788 (-0.591697) | 14.988651 / 8.074308 (6.914343) | 13.655103 / 10.191392 (3.463711) | 0.165079 / 0.680424 (-0.515345) | 0.017829 / 0.534201 (-0.516372) | 0.425903 / 0.579283 (-0.153381) | 0.419771 / 0.434364 (-0.014593) | 0.534309 / 0.540337 (-0.006028) | 0.635563 / 1.386936 (-0.751373) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7d17ccc9b9dde2d94803b1305226c5a58d916c5 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated 
after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010569 / 0.011353 (-0.000784) | 0.005790 / 0.011008 (-0.005218) | 0.118626 / 0.038508 (0.080118) | 0.040455 / 0.023109 (0.017346) | 0.342309 / 0.275898 (0.066411) | 0.411828 / 0.323480 (0.088349) | 0.008824 / 0.007986 (0.000839) | 0.005426 / 0.004328 (0.001098) | 0.088740 / 0.004250 (0.084489) | 0.050042 / 0.037052 (0.012990) | 0.352350 / 0.258489 (0.093861) | 0.396030 / 0.293841 (0.102189) | 0.043385 / 0.128546 (-0.085162) | 0.013805 / 0.075646 (-0.061841) | 0.396489 / 0.419271 (-0.022783) | 0.055667 / 0.043533 (0.012135) | 0.336165 / 0.255139 (0.081026) | 0.372912 / 0.283200 (0.089713) | 0.115343 / 0.141683 (-0.026340) | 1.656412 / 1.452155 (0.204257) | 1.708993 / 1.492716 (0.216277) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011650 / 0.018006 (-0.006357) | 0.444415 / 0.000490 (0.443926) | 0.003985 / 0.000200 (0.003785) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031718 / 0.037411 (-0.005693) | 0.119640 / 0.014526 (0.105114) | 0.138519 / 0.176557 (-0.038037) | 0.188847 / 0.737135 (-0.548288) | 0.137891 / 0.296338 (-0.158448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447540 / 0.215209 (0.232331) | 4.577189 / 2.077655 (2.499534) | 2.106992 / 1.504120 (0.602872) | 1.889631 / 1.541195 (0.348436) | 1.972256 / 1.468490 (0.503766) | 0.778209 / 4.584777 (-3.806568) | 4.430279 / 3.745712 (0.684567) | 2.401226 / 5.269862 (-2.868636) | 1.481251 / 4.565676 (-3.084425) | 0.094244 / 0.424275 (-0.330031) | 0.013961 / 0.007607 (0.006354) | 0.570962 / 0.226044 (0.344917) | 5.809224 / 2.268929 (3.540295) | 2.663290 / 55.444624 (-52.781334) | 2.201228 / 6.876477 (-4.675249) | 2.319240 / 2.142072 (0.177168) | 0.938340 / 4.805227 (-3.866887) | 0.185546 / 6.500664 (-6.315118) | 0.069087 / 0.075469 (-0.006382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.448597 / 1.841788 (-0.393191) | 17.188573 / 8.074308 (9.114265) | 16.197532 / 10.191392 (6.006140) | 0.194064 / 0.680424 (-0.486360) | 0.033694 / 0.534201 (-0.500507) | 0.507585 / 0.579283 (-0.071699) | 0.505470 / 0.434364 (0.071106) | 0.623270 / 0.540337 
(0.082932) | 0.729964 / 1.386936 (-0.656972) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008529 / 0.011353 (-0.002824) | 0.005705 / 0.011008 (-0.005304) | 0.085594 / 0.038508 (0.047086) | 0.038377 / 0.023109 (0.015268) | 0.384221 / 0.275898 (0.108323) | 0.414678 / 0.323480 (0.091199) | 0.006195 / 0.007986 (-0.001791) | 0.004549 / 0.004328 (0.000221) | 0.082710 / 0.004250 (0.078460) | 0.054899 / 0.037052 (0.017847) | 0.404017 / 0.258489 (0.145528) | 0.450309 / 0.293841 (0.156468) | 0.040620 / 0.128546 (-0.087926) | 0.013774 / 0.075646 (-0.061872) | 0.099231 / 0.419271 (-0.320041) | 0.057183 / 0.043533 (0.013650) | 0.390806 / 0.255139 (0.135667) | 0.419334 / 0.283200 (0.136134) | 0.116449 / 0.141683 (-0.025234) | 1.709124 / 1.452155 (0.256969) | 1.812769 / 1.492716 (0.320052) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225206 / 0.018006 (0.207199) | 0.440530 / 0.000490 (0.440040) | 0.002982 / 0.000200 (0.002782) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032256 / 0.037411 (-0.005155) | 0.127086 / 0.014526 (0.112560) | 0.138133 / 0.176557 (-0.038424) | 0.176168 / 0.737135 (-0.560968) | 0.146072 / 0.296338 (-0.150267) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474374 / 0.215209 (0.259165) | 4.785106 / 2.077655 (2.707452) | 2.319344 / 1.504120 (0.815225) | 2.075239 / 1.541195 (0.534045) | 2.179231 / 
1.468490 (0.710741) | 0.832124 / 4.584777 (-3.752653) | 4.376302 / 3.745712 (0.630590) | 3.966837 / 5.269862 (-1.303024) | 1.820230 / 4.565676 (-2.745446) | 0.100692 / 0.424275 (-0.323583) | 0.014748 / 0.007607 (0.007141) | 0.568702 / 0.226044 (0.342657) | 5.771548 / 2.268929 (3.502619) | 2.747431 / 55.444624 (-52.697193) | 2.448482 / 6.876477 (-4.427994) | 2.497206 / 2.142072 (0.355133) | 0.960842 / 4.805227 (-3.844385) | 0.192855 / 6.500664 (-6.307809) | 0.072494 / 0.075469 (-0.002975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.474542 / 1.841788 (-0.367245) | 17.344804 / 8.074308 (9.270496) | 15.336082 / 10.191392 (5.144690) | 0.200134 / 0.680424 (-0.480290) | 0.020728 / 0.534201 (-0.513473) | 0.488854 / 0.579283 (-0.090429) | 0.490781 / 0.434364 (0.056418) | 0.626288 / 0.540337 (0.085950) | 0.721130 / 1.386936 (-0.665806) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cd7877892aa48a2470b01f52013390c54aca8a49 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008542 / 0.011353 (-0.002811) | 0.004624 / 0.011008 (-0.006384) | 0.100749 / 0.038508 (0.062241) | 0.029587 / 0.023109 (0.006478) | 0.298680 / 0.275898 (0.022782) | 0.359659 / 0.323480 (0.036180) | 0.007001 / 0.007986 (-0.000984) | 0.003398 / 0.004328 (-0.000930) | 0.078654 / 0.004250 (0.074404) | 0.036440 / 0.037052 (-0.000612) | 0.313245 / 0.258489 (0.054756) | 0.342776 / 0.293841 (0.048936) | 0.033195 / 0.128546 (-0.095352) | 0.011500 / 0.075646 (-0.064146) | 0.323957 / 0.419271 (-0.095314) | 0.039878 / 0.043533 (-0.003655) | 0.298189 / 0.255139 (0.043050) | 0.325488 / 0.283200 (0.042289) | 0.087276 / 0.141683 (-0.054407) | 1.480846 / 1.452155 (0.028691) | 1.507016 / 1.492716 (0.014300) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189570 / 0.018006 (0.171564) | 0.406407 / 0.000490 (0.405917) | 0.003062 / 0.000200 
(0.002862) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022865 / 0.037411 (-0.014546) | 0.096103 / 0.014526 (0.081578) | 0.106462 / 0.176557 (-0.070094) | 0.140888 / 0.737135 (-0.596247) | 0.108172 / 0.296338 (-0.188167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415951 / 0.215209 (0.200742) | 4.172187 / 2.077655 (2.094532) | 1.842210 / 1.504120 (0.338090) | 1.636997 / 1.541195 (0.095802) | 1.706078 / 1.468490 (0.237588) | 0.695825 / 4.584777 (-3.888952) | 3.337354 / 3.745712 (-0.408358) | 1.877880 / 5.269862 (-3.391982) | 1.153882 / 4.565676 (-3.411794) | 0.082923 / 0.424275 (-0.341352) | 0.012814 / 0.007607 (0.005207) | 0.521793 / 0.226044 (0.295748) | 5.275980 / 2.268929 (3.007051) | 2.279230 / 55.444624 (-53.165394) | 1.941777 / 6.876477 (-4.934700) | 1.981297 / 2.142072 (-0.160775) | 0.809669 / 4.805227 (-3.995558) | 0.148753 / 6.500664 (-6.351911) | 0.064909 / 0.075469 (-0.010560) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226757 / 1.841788 (-0.615031) | 13.717354 / 8.074308 (5.643046) | 12.925885 / 10.191392 (2.734493) | 0.137926 / 0.680424 (-0.542498) | 0.028788 / 0.534201 (-0.505413) | 0.396654 / 0.579283 (-0.182630) | 0.401931 / 0.434364 (-0.032432) | 0.460515 / 0.540337 (-0.079823) | 0.537903 / 1.386936 (-0.849033) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006757 / 0.011353 (-0.004596) | 0.004474 / 0.011008 (-0.006534) | 0.076571 / 0.038508 (0.038063) | 0.027580 / 0.023109 (0.004471) | 0.348231 / 0.275898 (0.072333) | 0.398403 / 0.323480 (0.074923) | 0.005089 / 0.007986 (-0.002897) | 0.004676 / 0.004328 (0.000347) | 0.076444 / 0.004250 (0.072194) | 0.038508 / 0.037052 (0.001456) | 0.348515 / 0.258489 (0.090026) | 0.401456 / 0.293841 (0.107615) | 0.031630 / 0.128546 (-0.096916) | 0.011698 / 0.075646 (-0.063949) | 0.085805 / 0.419271 (-0.333467) | 0.041962 / 0.043533 (-0.001570) | 0.343415 / 0.255139 (0.088276) | 0.383001 / 0.283200 (0.099801) | 0.090231 / 0.141683 (-0.051452) | 1.488114 / 1.452155 (0.035960) | 1.569039 / 1.492716 (0.076323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261751 / 0.018006 (0.243745) | 0.411354 / 0.000490 (0.410865) | 0.015103 / 0.000200 (0.014903) | 0.000262 / 0.000054 (0.000208) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025423 / 0.037411 (-0.011988) | 0.101334 / 0.014526 (0.086808) | 0.108835 / 0.176557 (-0.067722) | 0.143995 / 0.737135 (-0.593140) | 0.111751 / 0.296338 (-0.184588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446507 / 0.215209 (0.231298) | 4.461543 / 2.077655 (2.383888) | 2.104648 / 1.504120 (0.600528) | 1.895900 / 1.541195 (0.354706) | 1.985481 / 1.468490 (0.516991) | 0.699029 / 4.584777 (-3.885748) | 3.371064 / 3.745712 (-0.374648) | 1.883445 / 5.269862 (-3.386416) | 1.166150 / 4.565676 (-3.399527) | 0.082639 / 0.424275 (-0.341636) | 0.012605 / 0.007607 (0.004998) | 0.544860 / 0.226044 (0.318815) | 5.513223 / 2.268929 (3.244294) | 2.570661 / 55.444624 (-52.873963) | 2.206066 / 6.876477 (-4.670411) | 2.256346 / 2.142072 (0.114273) | 0.801142 / 4.805227 (-4.004085) | 0.150412 / 6.500664 (-6.350252) | 0.067742 / 0.075469 (-0.007727) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303477 / 1.841788 (-0.538310) | 14.287767 / 8.074308 (6.213458) | 13.525563 / 10.191392 (3.334171) | 0.148202 / 0.680424 (-0.532222) | 0.016868 / 0.534201 (-0.517333) | 0.380729 / 0.579283 (-0.198555) | 0.388177 / 0.434364 (-0.046187) | 0.477410 / 0.540337 (-0.062927) | 0.569343 / 1.386936 (-0.817593) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79c18b77113da3f2e31af0570ec119877ca2a390 \"CML watermark\")\n", "> PS: I think it would be a good idea to add links to the Guide pages for better discoverability and to somewhat \"justify their presence in the docs\" (from the tutorial/how-to pages to the guides; some guides are not referenced at all)\r\n\r\nJust merged #5485, which references this new doc! Will look for other pages in the docs where it'd make sense to add them :)" ]
2023-01-05T18:12:17Z
2023-02-01T18:11:45Z
2023-02-01T16:36:01Z
MEMBER
null
null
null
Added `ds.to_iterable()` to get an iterable dataset from a map-style Arrow dataset. It also has a `num_shards` argument to split the dataset before converting it to an iterable dataset. Sharding is important to enable efficient shuffling and parallel loading of iterable datasets. TODO: - [x] tests - [x] docs Fix https://github.com/huggingface/datasets/issues/5265
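A minimal usage sketch of the conversion this PR describes (the dataset, shard count, and buffer size below are illustrative; in released versions of `datasets` the method is exposed as `Dataset.to_iterable_dataset`):

```python
# Sketch only: convert a map-style Dataset to an IterableDataset,
# sharding it first so it can be shuffled and loaded in parallel.
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"example {i}" for i in range(1000)]})

iterable_ds = ds.to_iterable_dataset(num_shards=4)  # shard before converting
iterable_ds = iterable_ds.shuffle(seed=42, buffer_size=100)  # shard-level shuffling

for example in iterable_ds.take(3):
    print(example)
```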
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5410/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5410/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5410.diff", "html_url": "https://github.com/huggingface/datasets/pull/5410", "merged_at": "2023-02-01T16:36:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/5410.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5410" }
https://api.github.com/repos/huggingface/datasets/issues/5470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5470/comments
https://api.github.com/repos/huggingface/datasets/issues/5470/events
https://github.com/huggingface/datasets/pull/5470
1,558,542,611
PR_kwDODunzps5InLw9
5,470
Update dataset card creation
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI failure is unrelated to your PR - feel free to merge :)", "Haha thanks, you read my mind :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008332 / 0.011353 (-0.003021) | 0.004556 / 0.011008 (-0.006452) | 0.102239 / 0.038508 (0.063731) | 0.029332 / 0.023109 (0.006222) | 0.296189 / 0.275898 (0.020291) | 0.355746 / 0.323480 (0.032266) | 0.007705 / 0.007986 (-0.000281) | 0.003488 / 0.004328 (-0.000840) | 0.079142 / 0.004250 (0.074891) | 0.034980 / 0.037052 (-0.002073) | 0.307460 / 0.258489 (0.048971) | 0.345944 / 0.293841 (0.052103) | 0.033815 / 0.128546 (-0.094731) | 0.011603 / 0.075646 (-0.064044) | 0.322097 / 0.419271 (-0.097175) | 0.043753 / 0.043533 (0.000220) | 0.296706 / 0.255139 (0.041567) | 0.323195 / 0.283200 (0.039996) | 0.092295 / 0.141683 (-0.049388) | 1.542556 / 1.452155 (0.090401) | 1.571896 / 1.492716 (0.079180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191075 / 0.018006 (0.173069) | 0.407394 / 0.000490 (0.406905) | 0.002033 / 0.000200 (0.001833) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023175 / 0.037411 (-0.014236) | 0.094774 / 0.014526 (0.080248) | 0.105782 / 0.176557 (-0.070775) | 0.146608 / 0.737135 (-0.590528) | 0.107519 / 0.296338 (-0.188819) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421516 / 0.215209 (0.206306) | 4.201091 / 2.077655 (2.123436) | 1.880285 / 1.504120 (0.376165) | 1.676333 / 1.541195 (0.135139) | 1.734301 / 1.468490 (0.265811) | 0.688504 / 4.584777 (-3.896273) | 3.370289 / 3.745712 (-0.375423) | 3.127661 / 5.269862 (-2.142201) | 1.562570 / 4.565676 (-3.003106) | 0.081687 / 0.424275 (-0.342588) | 0.012334 / 0.007607 (0.004727) | 0.524125 / 0.226044 (0.298080) | 5.245595 / 2.268929 (2.976667) | 2.332622 / 55.444624 (-53.112002) | 1.973212 / 6.876477 (-4.903265) | 2.006507 / 2.142072 (-0.135565) | 0.807126 / 4.805227 (-3.998101) | 0.148254 / 6.500664 (-6.352411) | 0.064240 / 0.075469 (-0.011229) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206880 / 1.841788 (-0.634907) | 13.854877 / 8.074308 (5.780569) | 13.806772 / 10.191392 (3.615380) | 0.144380 / 0.680424 (-0.536044) | 0.028492 / 0.534201 (-0.505709) | 0.393854 / 0.579283 (-0.185429) | 0.402210 / 0.434364 (-0.032154) | 0.462138 / 0.540337 (-0.078199) | 0.537480 / 1.386936 (-0.849456) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004529 / 0.011008 (-0.006479) | 0.077925 / 0.038508 (0.039417) | 0.027824 / 0.023109 (0.004715) | 0.342288 / 0.275898 (0.066390) | 0.375071 / 0.323480 (0.051591) | 0.004889 / 0.007986 (-0.003097) | 0.003353 / 0.004328 (-0.000975) | 0.076198 / 0.004250 (0.071947) | 0.037797 / 0.037052 (0.000744) | 0.347834 / 0.258489 (0.089345) | 0.384200 / 0.293841 (0.090359) | 0.032184 / 0.128546 (-0.096362) | 0.011674 / 0.075646 (-0.063972) | 0.086242 / 0.419271 (-0.333029) | 0.044465 / 0.043533 (0.000932) | 0.341712 / 0.255139 (0.086573) | 0.366908 / 0.283200 (0.083709) | 0.091526 / 0.141683 (-0.050156) | 1.495798 / 1.452155 (0.043643) | 1.571700 / 1.492716 (0.078984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.221962 / 0.018006 (0.203955) | 0.393095 / 0.000490 (0.392605) | 0.000385 / 0.000200 (0.000185) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.099278 / 0.014526 (0.084753) | 0.105940 / 0.176557 (-0.070617) | 0.141334 / 0.737135 (-0.595802) | 0.110898 / 0.296338 (-0.185440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446150 / 0.215209 (0.230941) | 4.471441 / 2.077655 (2.393786) | 2.124864 / 1.504120 (0.620744) | 1.909950 / 1.541195 (0.368755) | 1.970085 / 1.468490 (0.501595) | 0.706711 / 4.584777 (-3.878066) | 3.380336 / 3.745712 (-0.365376) | 1.866106 / 5.269862 (-3.403756) | 1.160657 / 4.565676 (-3.405019) | 0.082786 / 0.424275 (-0.341489) | 0.012470 / 0.007607 (0.004862) | 0.537620 / 0.226044 (0.311575) | 5.390588 / 2.268929 (3.121659) | 2.539137 / 55.444624 (-52.905488) | 2.191867 / 6.876477 (-4.684610) | 2.236212 / 2.142072 (0.094139) | 0.810756 / 4.805227 (-3.994471) | 0.150933 / 6.500664 (-6.349731) | 0.066141 / 0.075469 (-0.009328) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271595 / 1.841788 (-0.570193) | 13.840013 / 8.074308 (5.765705) | 13.334443 / 10.191392 (3.143051) | 0.150096 / 0.680424 (-0.530328) | 0.016919 / 0.534201 (-0.517282) | 0.375534 / 0.579283 (-0.203749) | 0.387203 / 0.434364 (-0.047161) | 0.463500 / 0.540337 (-0.076838) | 0.553496 / 1.386936 (-0.833440) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f2e47230c13f977bcebdc4380623f59da67a75f \"CML watermark\")\n" ]
2023-01-26T17:57:51Z
2023-01-27T16:27:00Z
2023-01-27T16:20:10Z
MEMBER
null
null
null
Encourages users to create a dataset card directly on the Hub with the new metadata UI and to import the dataset card template, instead of telling them to manually create and upload one.
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5470/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5470/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5470.diff", "html_url": "https://github.com/huggingface/datasets/pull/5470", "merged_at": "2023-01-27T16:20:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/5470.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5470" }
https://api.github.com/repos/huggingface/datasets/issues/6603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6603/comments
https://api.github.com/repos/huggingface/datasets/issues/6603/events
https://github.com/huggingface/datasets/issues/6603
2,089,230,766
I_kwDODunzps58hyGu
6,603
datasets map `cache_file_name` does not work
{ "avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4", "events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}", "followers_url": "https://api.github.com/users/ChenchaoZhao/followers", "following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}", "gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenchaoZhao", "id": 35147961, "login": "ChenchaoZhao", "node_id": "MDQ6VXNlcjM1MTQ3OTYx", "organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs", "received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events", "repos_url": "https://api.github.com/users/ChenchaoZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenchaoZhao", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Unfortunately, I'm unable to reproduce this error. Can you share the reproducer?", "```\r\nds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-fn\") # this worked\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/filename\") # this failed\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/\") # this failed\r\n\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/tmp/whatever-folder/tmp1_izxvoo'\r\n```\r\n\r\nIt will fail if the filename parents do not exists. If we have `os.makedirs(\"/tmp/whatever-folder\")`, then it worked.\r\n\r\nMaybe add the `mkdir -p` into the map function?" ]
2024-01-18T23:08:30Z
2024-01-28T04:01:15Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug In the documentation, the `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but passing one doesn't work. ### Steps to reproduce the bug 1. pick a dataset 2. write a map function 3. call `ds.map(..., cache_file_name='some_filename')` 4. it crashes ### Expected behavior It should either tell you that the path you specified does not exist, or create the cache file at that path. ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.12.2
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6603/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6603/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6923/comments
https://api.github.com/repos/huggingface/datasets/issues/6923/events
https://github.com/huggingface/datasets/issues/6923
2,319,292,872
I_kwDODunzps6KPZnI
6,923
Exported Parquet table: audio column is null bytes in Arrow
{ "avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4", "events_url": "https://api.github.com/users/anioji/events{/privacy}", "followers_url": "https://api.github.com/users/anioji/followers", "following_url": "https://api.github.com/users/anioji/following{/other_user}", "gists_url": "https://api.github.com/users/anioji/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anioji", "id": 140120605, "login": "anioji", "node_id": "U_kgDOCFoSHQ", "organizations_url": "https://api.github.com/users/anioji/orgs", "received_events_url": "https://api.github.com/users/anioji/received_events", "repos_url": "https://api.github.com/users/anioji/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anioji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anioji/subscriptions", "type": "User", "url": "https://api.github.com/users/anioji", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-05-27T14:27:57Z
2024-05-27T14:27:57Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When exporting the processed audio table with the `Dataset.to_parquet` function, the audio column contains PyArrow objects of the form {bytes: null, path: "Some/Path"}. At the same time, the same dataset uploaded to the Hub contains the actual byte arrays. ![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e) ![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021) ### Steps to reproduce the bug 1. Load a dataset with audio files and cast the audio column 2. Export the dataset locally and push it to the Hub 3. Compare the two: the uploaded dataset differs from the one that was saved locally ```py from datasets import Dataset, Audio df = Dataset.from_csv("./datasets.csv") df = df.cast_column("audio", Audio(16000)) df.to_parquet("./datasets.parquet") df.push_to_hub(repo_id="************", token="**********************") ``` You can use this replication package: [replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip) ### Expected behavior The two Parquet tables should be identical in content. ### Environment info Python 3.11+ (I also tried it on 3.12 and got the same result)
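A small verification sketch, assuming one reads the exported file back with `pyarrow` (the file and column names follow the reproducer above):

```python
# Read the locally exported Parquet file back and inspect the first
# audio value; per the report it shows {'bytes': None, 'path': ...}
# locally, while the copy pushed to the Hub contains the actual bytes.
import pyarrow.parquet as pq

table = pq.read_table("./datasets.parquet")
print(table.column("audio")[0].as_py())
```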
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6923/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6923/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7484/comments
https://api.github.com/repos/huggingface/datasets/issues/7484/events
https://github.com/huggingface/datasets/pull/7484
2,953,677,168
PR_kwDODunzps6Qbevn
7,484
release: 3.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-27T16:33:27Z
2025-03-27T16:35:44Z
2025-03-27T16:34:22Z
MEMBER
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7484/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7484.diff", "html_url": "https://github.com/huggingface/datasets/pull/7484", "merged_at": "2025-03-27T16:34:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/7484.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7484" }
https://api.github.com/repos/huggingface/datasets/issues/6420
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6420/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6420/comments
https://api.github.com/repos/huggingface/datasets/issues/6420/events
https://github.com/huggingface/datasets/pull/6420
1,994,278,903
PR_kwDODunzps5ffhdi
6,420
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6420). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004536 / 0.011353 (-0.006816) | 0.002979 / 0.011008 (-0.008030) | 0.061984 / 0.038508 (0.023476) | 0.029382 / 0.023109 (0.006273) | 0.245237 / 0.275898 (-0.030661) | 0.270571 / 0.323480 (-0.052909) | 0.003956 / 0.007986 (-0.004029) | 0.002453 / 0.004328 (-0.001876) | 0.047967 / 0.004250 (0.043717) | 0.043695 / 0.037052 (0.006643) | 0.248457 / 0.258489 (-0.010032) | 0.283293 / 0.293841 (-0.010548) | 0.023603 / 0.128546 (-0.104943) | 0.007225 / 0.075646 (-0.068422) | 0.200533 / 0.419271 (-0.218739) | 0.055310 / 0.043533 (0.011777) | 0.245152 / 0.255139 (-0.009987) | 0.267187 / 0.283200 (-0.016012) | 0.018158 / 0.141683 (-0.123525) | 1.126079 / 1.452155 (-0.326075) | 1.185137 / 1.492716 (-0.307580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092436 / 0.018006 (0.074430) | 0.300132 / 0.000490 (0.299642) | 0.000206 / 0.000200 (0.000006) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018476 / 0.037411 (-0.018935) | 0.062827 / 0.014526 (0.048301) | 0.074605 / 0.176557 (-0.101952) | 0.119768 / 0.737135 (-0.617368) | 0.076044 / 0.296338 (-0.220294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279717 / 0.215209 (0.064508) | 2.752308 / 2.077655 (0.674654) | 1.434954 / 1.504120 (-0.069166) | 1.314700 / 1.541195 (-0.226495) | 1.347689 / 1.468490 (-0.120802) | 0.400332 / 4.584777 (-4.184445) | 2.383024 / 3.745712 (-1.362689) | 2.583130 / 5.269862 (-2.686732) | 1.567670 / 4.565676 (-2.998007) | 0.045446 / 0.424275 (-0.378829) | 0.004813 / 0.007607 (-0.002794) | 0.336191 / 0.226044 (0.110147) | 3.319837 / 2.268929 (1.050909) | 1.816808 / 55.444624 (-53.627817) | 1.539052 / 6.876477 (-5.337424) | 1.550765 / 2.142072 (-0.591307) | 0.484253 / 4.805227 (-4.320974) | 0.100494 / 6.500664 (-6.400170) | 0.041614 / 0.075469 (-0.033855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.940857 / 1.841788 (-0.900931) | 11.784946 / 8.074308 (3.710638) | 10.397038 / 10.191392 (0.205646) | 0.141458 / 0.680424 (-0.538965) | 0.014193 / 0.534201 (-0.520008) | 0.268304 / 0.579283 (-0.310979) | 0.267059 / 0.434364 (-0.167305) | 0.309389 / 0.540337 (-0.230949) | 0.420628 / 1.386936 (-0.966308) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004776 / 0.011353 (-0.006577) | 0.002941 / 0.011008 (-0.008067) | 0.048659 / 0.038508 (0.010151) | 0.053334 / 0.023109 (0.030225) | 0.273342 / 0.275898 (-0.002556) | 0.302278 / 0.323480 (-0.021202) | 0.004001 / 0.007986 (-0.003984) | 0.002414 / 0.004328 (-0.001914) | 0.047504 / 0.004250 (0.043254) | 0.038581 / 0.037052 (0.001529) | 0.277768 / 0.258489 (0.019279) | 0.306772 / 0.293841 (0.012931) | 0.024146 / 0.128546 (-0.104400) | 0.007233 / 0.075646 (-0.068413) | 0.053308 / 0.419271 (-0.365964) | 0.032617 / 0.043533 (-0.010916) | 0.277390 / 0.255139 (0.022251) | 0.296015 / 0.283200 (0.012816) | 0.018733 / 0.141683 (-0.122950) | 1.124895 / 1.452155 (-0.327260) | 1.182579 / 1.492716 (-0.310137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093375 / 0.018006 (0.075369) | 0.301555 / 0.000490 (0.301066) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021284 / 0.037411 (-0.016127) | 0.070158 / 0.014526 (0.055632) | 0.080187 / 0.176557 (-0.096370) | 0.119282 / 0.737135 (-0.617854) | 0.081672 / 0.296338 (-0.214666) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.314396 / 0.215209 (0.099187) | 2.975114 / 2.077655 (0.897459) | 1.724658 / 1.504120 (0.220539) | 1.604464 / 1.541195 (0.063269) | 1.652736 / 1.468490 (0.184246) | 0.395064 / 4.584777 (-4.189713) | 2.412768 / 3.745712 (-1.332944) | 2.564427 / 5.269862 (-2.705435) | 1.507627 / 4.565676 (-3.058050) | 0.045463 / 0.424275 (-0.378812) | 0.004797 / 0.007607 (-0.002810) | 0.383115 / 0.226044 (0.157071) | 3.501976 / 2.268929 (1.233048) | 2.087512 / 55.444624 (-53.357113) | 1.793132 / 6.876477 (-5.083345) | 1.804178 / 2.142072 (-0.337895) | 0.468287 / 4.805227 (-4.336940) | 0.097247 / 6.500664 (-6.403417) | 0.041139 / 0.075469 (-0.034330) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976034 / 1.841788 (-0.865754) | 12.431248 / 8.074308 (4.356940) | 10.896064 / 10.191392 (0.704672) | 0.129137 / 0.680424 (-0.551287) | 0.015636 / 0.534201 (-0.518565) | 0.268219 / 0.579283 (-0.311064) | 0.278345 / 0.434364 (-0.156019) | 0.302696 / 0.540337 (-0.237642) | 0.408465 / 1.386936 (-0.978471) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51c53e94acd7a273c24899c045446df021314cd2 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007703 / 0.011353 (-0.003650) | 0.004614 / 0.011008 (-0.006394) | 0.101425 / 0.038508 (0.062917) | 0.040122 / 0.023109 (0.017013) | 0.398890 / 0.275898 (0.122992) | 0.424392 / 0.323480 (0.100912) | 0.005411 / 0.007986 (-0.002575) | 0.003747 / 0.004328 (-0.000582) | 0.080494 / 0.004250 (0.076243) | 0.059392 / 0.037052 (0.022340) | 0.398025 / 0.258489 (0.139536) | 0.454293 / 0.293841 (0.160452) | 0.043662 / 0.128546 (-0.084884) | 0.013726 / 0.075646 (-0.061920) | 0.352910 / 0.419271 (-0.066362) | 0.088572 / 0.043533 (0.045039) | 0.401677 / 0.255139 (0.146538) | 0.421774 / 0.283200 (0.138575) | 0.033377 / 0.141683 (-0.108305) | 1.728499 / 1.452155 (0.276344) | 1.821557 / 1.492716 (0.328841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230744 / 0.018006 (0.212738) | 0.496188 / 0.000490 (0.495698) | 0.010315 / 0.000200 (0.010115) | 0.000402 / 0.000054 (0.000348) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028859 / 0.037411 (-0.008552) | 0.089688 / 0.014526 (0.075163) | 0.111697 / 0.176557 (-0.064860) | 0.183238 / 0.737135 (-0.553898) | 0.112407 / 0.296338 (-0.183931) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.558394 / 0.215209 (0.343185) | 5.643048 / 2.077655 (3.565393) | 2.454622 / 1.504120 (0.950502) | 2.183338 / 1.541195 (0.642143) | 2.324793 / 1.468490 (0.856303) | 0.859482 / 4.584777 (-3.725295) | 4.959346 / 3.745712 (1.213634) | 4.599224 / 5.269862 (-0.670638) | 2.764382 / 4.565676 (-1.801295) | 0.089976 / 0.424275 (-0.334299) | 0.008144 / 0.007607 (0.000537) | 0.634675 / 0.226044 (0.408631) | 6.555693 / 2.268929 (4.286765) | 3.080252 / 55.444624 (-52.364373) | 2.442715 / 6.876477 (-4.433762) | 2.475126 / 2.142072 (0.333053) | 0.986459 / 4.805227 (-3.818768) | 0.193859 / 6.500664 (-6.306805) | 0.063652 / 0.075469 (-0.011817) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.545318 / 1.841788 (-0.296469) | 21.928751 / 8.074308 (13.854442) | 20.598229 / 10.191392 (10.406837) 
| 0.234046 / 0.680424 (-0.446377) | 0.025947 / 0.534201 (-0.508254) | 0.459773 / 0.579283 (-0.119510) | 0.598026 / 0.434364 (0.163662) | 0.555260 / 0.540337 (0.014922) | 0.782767 / 1.386936 (-0.604169) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009322 / 0.011353 (-0.002030) | 0.004650 / 0.011008 (-0.006358) | 0.079326 / 0.038508 (0.040818) | 0.079112 / 0.023109 (0.056003) | 0.428708 / 0.275898 (0.152810) | 0.481647 / 0.323480 (0.158168) | 0.006419 / 0.007986 (-0.001566) | 0.003878 / 0.004328 (-0.000450) | 0.079013 / 0.004250 (0.074762) | 0.058107 / 0.037052 (0.021055) | 0.436967 / 0.258489 (0.178478) | 0.501120 / 0.293841 (0.207279) | 0.052972 / 0.128546 (-0.075574) | 0.014414 / 0.075646 (-0.061232) | 0.098587 / 0.419271 (-0.320685) | 0.061626 / 0.043533 (0.018093) | 0.451623 / 0.255139 (0.196484) | 0.468893 / 0.283200 (0.185693) | 0.032479 / 0.141683 (-0.109203) | 1.911743 / 1.452155 (0.459588) | 1.969024 / 1.492716 (0.476308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232015 / 0.018006 (0.214009) | 0.508637 / 0.000490 (0.508147) | 0.005470 / 0.000200 (0.005270) | 0.000131 / 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035345 / 0.037411 (-0.002066) | 0.106319 / 0.014526 (0.091794) | 0.117205 / 0.176557 (-0.059352) | 0.176527 / 0.737135 (-0.560608) | 0.121566 / 0.296338 (-0.174773) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new 
/ old (diff) | 0.584920 / 0.215209 (0.369711) | 5.745688 / 2.077655 (3.668034) | 2.519875 / 1.504120 (1.015755) | 2.197593 / 1.541195 (0.656398) | 2.296670 / 1.468490 (0.828180) | 0.831938 / 4.584777 (-3.752839) | 5.130594 / 3.745712 (1.384882) | 4.581385 / 5.269862 (-0.688476) | 2.829516 / 4.565676 (-1.736161) | 0.099015 / 0.424275 (-0.325260) | 0.011468 / 0.007607 (0.003861) | 0.702717 / 0.226044 (0.476672) | 6.856099 / 2.268929 (4.587170) | 3.372966 / 55.444624 (-52.071658) | 2.567664 / 6.876477 (-4.308812) | 2.699200 / 2.142072 (0.557127) | 0.992316 / 4.805227 (-3.812911) | 0.190463 / 6.500664 (-6.310201) | 0.063305 / 0.075469 (-0.012165) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.591491 / 1.841788 (-0.250296) | 21.696492 / 8.074308 (13.622184) | 19.695404 / 10.191392 (9.504012) | 0.222853 / 0.680424 (-0.457571) | 0.032936 / 0.534201 (-0.501265) | 0.431209 / 0.579283 (-0.148074) | 0.543101 / 0.434364 (0.108737) | 0.543427 / 0.540337 (0.003089) | 0.742102 / 1.386936 (-0.644834) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#534a227179265df9093230885613c95390325705 \"CML watermark\")\n" ]
2023-11-15T08:22:19Z
2023-11-15T08:33:36Z
2023-11-15T08:22:33Z
MEMBER
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6420/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6420/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6420.diff", "html_url": "https://github.com/huggingface/datasets/pull/6420", "merged_at": "2023-11-15T08:22:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/6420.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6420" }
https://api.github.com/repos/huggingface/datasets/issues/4955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4955/comments
https://api.github.com/repos/huggingface/datasets/issues/4955/events
https://github.com/huggingface/datasets/issues/4955
1,366,382,314
I_kwDODunzps5RcVbq
4,955
Raise a more precise error when the URL is unreachable in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2022-09-08T13:52:37Z
2022-09-08T13:53:36Z
null
COLLABORATOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
See for example: - https://github.com/huggingface/datasets/issues/3191 - https://github.com/huggingface/datasets/issues/3186 It would help provide clearer information on the Hub and help dataset maintainers solve the issue themselves more quickly. Currently: - https://huggingface.co/datasets/compguesswhat <img width="1029" alt="Capture d’écran 2022-09-08 à 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png"> - https://huggingface.co/datasets/nli_tr <img width="1032" alt="Capture d’écran 2022-09-08 à 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png"> cc @albertvillanova
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4955/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4955/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7202/comments
https://api.github.com/repos/huggingface/datasets/issues/7202/events
https://github.com/huggingface/datasets/issues/7202
2,572,583,798
I_kwDODunzps6ZVoN2
7,202
`from_parquet` return type annotation
{ "avatar_url": "https://avatars.githubusercontent.com/u/45285915?v=4", "events_url": "https://api.github.com/users/saiden89/events{/privacy}", "followers_url": "https://api.github.com/users/saiden89/followers", "following_url": "https://api.github.com/users/saiden89/following{/other_user}", "gists_url": "https://api.github.com/users/saiden89/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saiden89", "id": 45285915, "login": "saiden89", "node_id": "MDQ6VXNlcjQ1Mjg1OTE1", "organizations_url": "https://api.github.com/users/saiden89/orgs", "received_events_url": "https://api.github.com/users/saiden89/received_events", "repos_url": "https://api.github.com/users/saiden89/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saiden89/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saiden89/subscriptions", "type": "User", "url": "https://api.github.com/users/saiden89", "user_view_type": "public" }
[]
open
false
null
[]
null
[]
2024-10-08T09:08:10Z
2024-10-08T09:08:10Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug As already posted in https://github.com/microsoft/pylance-release/issues/6534, type hinting fails when building a dataset using the `from_parquet` constructor. Their suggestion is to comprehensively annotate the method's return type to better align with the docstring information. ### Steps to reproduce the bug ```python from datasets import Dataset dataset = Dataset.from_parquet(path_or_paths="file") dataset.map(lambda x: {"new": x["old"]}, batched=True) ``` ### Expected behavior `map` is a [valid](https://huggingface.co/docs/datasets/v3.0.1/en/package_reference/main_classes#datasets.Dataset.map) method, so no error should be thrown. ### Environment info - `datasets` version: 3.0.1 - Platform: macOS-15.0.1-arm64-arm-64bit - Python version: 3.12.6 - `huggingface_hub` version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
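Until the annotation is fixed upstream, a minimal sketch of a type-checker-side workaround (assuming the runtime return value is in fact a `Dataset`, as the docstring says) is to narrow the type explicitly:

```python
from typing import cast

from datasets import Dataset

# from_parquet is annotated loosely, so Pylance flags .map(); casting
# narrows the static type without changing runtime behavior.
dataset = cast(Dataset, Dataset.from_parquet(path_or_paths="file"))
dataset = dataset.map(lambda x: {"new": x["old"]}, batched=True)
```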
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7202/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7202/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6816/comments
https://api.github.com/repos/huggingface/datasets/issues/6816/events
https://github.com/huggingface/datasets/pull/6816
2,246,264,911
PR_kwDODunzps5s0MYO
6,816
Improve typing of Dataset.search, matching definition
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6816). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi! This is a breaking change. A better solution is to check for \"indexable\" types in `__getitem__` to support keys such as `np.int64`:\r\n```python\r\nimport operator\r\n\r\ndef _query_table_with_indices_mapping(...): # or _query_table\r\n ...\r\n try:\r\n operator.index(key)\r\n except TypeError:\r\n pass\r\n \r\n _raise_bad_key_type(key)\r\n```", "Sounds good! We should still update type annotations for SearchResult in my opinion." ]
2024-04-16T14:53:39Z
2024-04-16T15:54:10Z
2024-04-16T15:54:10Z
CONTRIBUTOR
null
null
null
Previously, the output of `score, indices = Dataset.search(...)` would be numpy arrays. The definition in `SearchResult` is a `List[int]`, so this PR now matches the expected type. The previous behavior is a bit annoying, as `Dataset.__getitem__` doesn't support `numpy.int64`, which forced me to convert `indices` to `int`, e.g.: ```python score, indices = ds.search(...) item = ds[int(indices[0])] ```
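A runnable sketch of the maintainer's suggested non-breaking alternative (hypothetical helper name; the real fix would live in the table-querying code): accept any key that implements `__index__`, such as `numpy.int64`, instead of changing the declared return type:

```python
import operator

import numpy as np

def normalize_key(key):
    # Accept ints and int-like objects (e.g. numpy.int64) via __index__;
    # reject anything that cannot be used as a row index.
    try:
        return operator.index(key)
    except TypeError:
        raise TypeError(f"Invalid key type: {type(key)}")

assert normalize_key(3) == 3
assert normalize_key(np.int64(7)) == 7  # numpy ints become plain ints
```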
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6816/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6816/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6816.diff", "html_url": "https://github.com/huggingface/datasets/pull/6816", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6816.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6816" }
https://api.github.com/repos/huggingface/datasets/issues/6006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6006/comments
https://api.github.com/repos/huggingface/datasets/issues/6006/events
https://github.com/huggingface/datasets/issues/6006
1,788,855,582
I_kwDODunzps5qn8Ue
6,006
NotADirectoryError when loading gigawords
{ "avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4", "events_url": "https://api.github.com/users/xipq/events{/privacy}", "followers_url": "https://api.github.com/users/xipq/followers", "following_url": "https://api.github.com/users/xipq/following{/other_user}", "gists_url": "https://api.github.com/users/xipq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xipq", "id": 115634163, "login": "xipq", "node_id": "U_kgDOBuRv8w", "organizations_url": "https://api.github.com/users/xipq/orgs", "received_events_url": "https://api.github.com/users/xipq/received_events", "repos_url": "https://api.github.com/users/xipq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xipq/subscriptions", "type": "User", "url": "https://api.github.com/users/xipq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence." ]
2023-07-05T06:23:41Z
2023-07-05T06:31:02Z
2023-07-05T06:31:01Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Got `NotADirectoryError` when loading the gigaword dataset ### Steps to reproduce the bug When running ``` import datasets datasets.load_dataset('gigaword') ``` Got the following exception: ```bash Traceback (most recent call last): File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single for key, record in generator: File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b64efb424b6/gigaword.py", line 115, in _generate_examples with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s: File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper return function(*args, use_auth_token=use_auth_token, **kwargs) File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xopen return open(main_hop, mode, *args, **kwargs) NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec/org_data/train.src.txt' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "gigaword.py", line 38, in <module> main() File "gigaword.py", line 35, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data dataset = self.load_dataset() File "gigaword.py", line 29, in load_dataset return datasets.load_dataset('gigaword') File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset builder_instance.download_and_prepare( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare self._download_and_prepare( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare super()._download_and_prepare( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior Download and process the dataset successfully ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10 - Python version: 3.8.0 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
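Since the root cause turned out to be a corrupted cached download (see the closing comment), a minimal recovery sketch, assuming the corruption is confined to the download cache, is to force a fresh download:

```python
import datasets

# Re-fetch the archives instead of reusing the (corrupted) cached copies.
ds = datasets.load_dataset("gigaword", download_mode="force_redownload")
```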
{ "avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4", "events_url": "https://api.github.com/users/xipq/events{/privacy}", "followers_url": "https://api.github.com/users/xipq/followers", "following_url": "https://api.github.com/users/xipq/following{/other_user}", "gists_url": "https://api.github.com/users/xipq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xipq", "id": 115634163, "login": "xipq", "node_id": "U_kgDOBuRv8w", "organizations_url": "https://api.github.com/users/xipq/orgs", "received_events_url": "https://api.github.com/users/xipq/received_events", "repos_url": "https://api.github.com/users/xipq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xipq/subscriptions", "type": "User", "url": "https://api.github.com/users/xipq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6006/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6551/comments
https://api.github.com/repos/huggingface/datasets/issues/6551/events
https://github.com/huggingface/datasets/pull/6551
2,062,768,400
PR_kwDODunzps5jEi1C
6,551
Fix parallel downloads for datasets without scripts
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6551). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005002 / 0.011353 (-0.006350) | 0.003300 / 0.011008 (-0.007708) | 0.062509 / 0.038508 (0.024001) | 0.029807 / 0.023109 (0.006698) | 0.249935 / 0.275898 (-0.025963) | 0.264320 / 0.323480 (-0.059160) | 0.003790 / 0.007986 (-0.004195) | 0.002554 / 0.004328 (-0.001774) | 0.048207 / 0.004250 (0.043956) | 0.042033 / 0.037052 (0.004981) | 0.245725 / 0.258489 (-0.012764) | 0.276695 / 0.293841 (-0.017146) | 0.026502 / 0.128546 (-0.102044) | 0.010379 / 0.075646 (-0.065268) | 0.207002 / 0.419271 (-0.212269) | 0.034648 / 0.043533 (-0.008885) | 0.247957 / 0.255139 (-0.007182) | 0.263921 / 0.283200 (-0.019278) | 0.017710 / 0.141683 (-0.123973) | 1.105851 / 1.452155 (-0.346304) | 1.163315 / 1.492716 (-0.329401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089842 / 0.018006 (0.071836) | 0.352499 / 0.000490 (0.352009) | 0.000201 / 0.000200 (0.000001) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018094 / 0.037411 (-0.019317) | 0.060463 / 0.014526 (0.045937) | 0.073257 / 0.176557 (-0.103300) | 0.119771 / 0.737135 (-0.617364) | 0.075210 / 0.296338 (-0.221128) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288365 / 0.215209 (0.073156) | 2.825377 / 2.077655 (0.747722) | 1.532436 / 1.504120 (0.028316) | 1.393475 / 1.541195 (-0.147719) | 1.381859 / 1.468490 (-0.086632) | 0.564155 / 4.584777 (-4.020622) | 2.398177 / 3.745712 (-1.347535) | 2.730271 / 5.269862 (-2.539590) | 1.713779 / 4.565676 (-2.851898) | 0.062789 / 0.424275 (-0.361486) | 0.004991 / 0.007607 (-0.002616) | 0.340789 / 0.226044 (0.114744) | 3.323543 / 2.268929 (1.054615) | 1.861925 / 55.444624 (-53.582700) | 1.555181 / 6.876477 (-5.321296) | 1.559512 / 2.142072 (-0.582560) | 0.634565 / 4.805227 (-4.170663) | 0.116529 / 6.500664 (-6.384135) | 0.041312 / 0.075469 (-0.034157) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945739 / 1.841788 (-0.896049) | 11.376130 / 8.074308 (3.301822) | 10.007752 / 10.191392 (-0.183640) | 0.126815 / 0.680424 (-0.553609) | 0.013898 / 0.534201 (-0.520303) | 0.287438 / 0.579283 (-0.291845) | 0.261532 / 0.434364 (-0.172832) | 0.320197 / 0.540337 (-0.220140) | 0.414444 / 1.386936 (-0.972492) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004994 / 0.011353 (-0.006359) | 0.003407 / 0.011008 (-0.007601) | 0.049281 / 0.038508 (0.010773) | 0.042815 / 0.023109 (0.019706) | 0.268291 / 0.275898 (-0.007607) | 0.285877 / 0.323480 (-0.037603) | 0.004006 / 0.007986 (-0.003980) | 0.002607 / 0.004328 (-0.001721) | 0.047682 / 0.004250 (0.043431) | 0.044281 / 0.037052 (0.007228) | 0.268287 / 0.258489 (0.009798) | 0.298649 / 0.293841 (0.004808) | 0.028607 / 0.128546 (-0.099939) | 0.010367 / 0.075646 (-0.065279) | 0.057114 / 0.419271 (-0.362158) | 0.053753 / 0.043533 (0.010220) | 0.269010 / 0.255139 (0.013871) | 0.285057 / 0.283200 (0.001858) | 0.017693 / 0.141683 (-0.123990) | 1.134718 / 1.452155 (-0.317436) | 1.186609 / 1.492716 (-0.306107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091109 / 0.018006 (0.073103) | 0.298603 / 0.000490 (0.298113) | 0.000216 / 0.000200 (0.000016) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022125 / 0.037411 (-0.015286) | 0.076570 / 0.014526 (0.062044) | 0.088903 / 0.176557 (-0.087654) | 0.126427 / 0.737135 (-0.610708) | 0.091001 / 0.296338 (-0.205338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300332 / 0.215209 (0.085123) | 2.971106 / 2.077655 (0.893452) | 1.617886 / 1.504120 (0.113766) | 1.476679 / 1.541195 (-0.064516) | 1.483750 / 1.468490 (0.015260) | 0.582569 / 4.584777 (-4.002208) | 2.441804 / 3.745712 (-1.303908) | 2.753927 / 5.269862 (-2.515935) | 1.733546 / 4.565676 (-2.832130) | 0.062653 / 0.424275 (-0.361622) | 0.005019 / 0.007607 (-0.002588) | 0.355556 / 0.226044 (0.129512) | 3.497431 / 2.268929 (1.228503) | 1.951711 / 55.444624 (-53.492913) | 1.663874 / 6.876477 (-5.212602) | 1.657363 / 2.142072 (-0.484709) | 0.653488 / 4.805227 (-4.151739) | 0.117055 / 6.500664 (-6.383609) | 0.040687 / 0.075469 (-0.034782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969485 / 1.841788 (-0.872303) | 12.064793 / 8.074308 (3.990485) | 10.851531 / 10.191392 (0.660139) | 0.129060 / 0.680424 (-0.551364) | 0.015339 / 0.534201 (-0.518862) | 0.287215 / 0.579283 (-0.292069) | 0.276545 / 0.434364 (-0.157819) | 0.322748 / 0.540337 (-0.217589) | 0.421363 / 1.386936 (-0.965573) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d26abadce0b884db32382b92422d8a6aa997d40a \"CML watermark\")\n", "@lhoestq \r\n<img width=\"1015\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/b19b9d92-c6f7-4e3a-8c9d-1178e56c67ea\">\r\nit's still not fixed =(", "@lhoestq i was thinking uninstalling `datasets` and then `pip install git+https://github.com/huggingface/datasets.git` has to fix it. Buuuuut. I'm not sure what's going on actually...\r\n\r\nNow instead of showing progress bars one after another it seems to be downloading the dataset way way way faster (like 4 mins instead of 58, thank you very much) but does not show any progress bars related to downloading at all.\r\n\r\n<img width=\"1170\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/21a84908-c44d-41b4-bb0d-8061cab3bc64\">\r\n\r\n<img width=\"1159\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/17604849/26684a8a-c10a-4fa2-bd84-cab4f938ffcc\">\r\n" ]
2024-01-02T18:06:18Z
2024-01-06T20:14:57Z
2024-01-03T13:19:48Z
MEMBER
null
null
null
Enable parallel downloads using multiprocessing when `num_proc` is passed to `load_dataset`. It was already enabled for datasets with scripts (if they passed lists to `dl_manager.download`) but not for no-script datasets (we pass dicts {split: [list of files]} to `dl_manager.download` for those). I fixed this by parallelising over the lists contained in the data files dicts when possible. I also added a context manager `stack_multiprocessing_download_progress_bars` in `DownloadManager` to stack the progress bars of the downloads (from `cached_path(...)` calls). Otherwise the progress bars overlap each other with an annoying flickering effect.
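For reference, a minimal usage sketch of the behavior this PR enables (the repository name is hypothetical):

```python
from datasets import load_dataset

# With this change, the files listed per split are downloaded in parallel
# across 8 worker processes, for no-script datasets as well.
ds = load_dataset("username/no-script-dataset", num_proc=8)
```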
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6551/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6551/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6551.diff", "html_url": "https://github.com/huggingface/datasets/pull/6551", "merged_at": "2024-01-03T13:19:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/6551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6551" }
https://api.github.com/repos/huggingface/datasets/issues/7382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7382/comments
https://api.github.com/repos/huggingface/datasets/issues/7382/events
https://github.com/huggingface/datasets/pull/7382
2,823,480,924
PR_kwDODunzps6Jo69f
7,382
Add Pandas, PyArrow and Polars docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7382). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-01-31T13:22:59Z
2025-01-31T16:30:59Z
2025-01-31T16:30:57Z
MEMBER
null
null
null
(also added the missing numpy docs and fixed a small bug in pyarrow formatting)
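A short usage sketch of the formatting backends these docs cover (the dataset name is illustrative, and it is assumed here that `with_format` accepts the "numpy", "pandas", "arrow", and "polars" backends named in the PR title):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# Slices come back as objects of the chosen backend.
pdf = ds.with_format("pandas")[:3]   # pandas.DataFrame
tbl = ds.with_format("arrow")[:3]    # pyarrow.Table
pldf = ds.with_format("polars")[:3]  # polars.DataFrame
```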
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7382/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7382/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7382.diff", "html_url": "https://github.com/huggingface/datasets/pull/7382", "merged_at": "2025-01-31T16:30:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/7382.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7382" }
https://api.github.com/repos/huggingface/datasets/issues/4764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4764/comments
https://api.github.com/repos/huggingface/datasets/issues/4764/events
https://github.com/huggingface/datasets/pull/4764
1,321,295,961
PR_kwDODunzps48RMLu
4,764
Update CI badge
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-07-28T18:04:20Z
2022-07-29T11:36:37Z
2022-07-29T11:23:51Z
COLLABORATOR
null
null
null
Replace the old CircleCI badge with a new one for GH Actions.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4764/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4764.diff", "html_url": "https://github.com/huggingface/datasets/pull/4764", "merged_at": "2022-07-29T11:23:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/4764.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4764" }
https://api.github.com/repos/huggingface/datasets/issues/6914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6914/comments
https://api.github.com/repos/huggingface/datasets/issues/6914/events
https://github.com/huggingface/datasets/pull/6914
2,310,107,326
PR_kwDODunzps5wLi3e
6,914
Preserve JSON column order and support list of strings field
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6914). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005492 / 0.011353 (-0.005861) | 0.004087 / 0.011008 (-0.006921) | 0.065334 / 0.038508 (0.026826) | 0.032282 / 0.023109 (0.009173) | 0.246441 / 0.275898 (-0.029457) | 0.278807 / 0.323480 (-0.044673) | 0.003245 / 0.007986 (-0.004741) | 0.003795 / 0.004328 (-0.000534) | 0.050082 / 0.004250 (0.045832) | 0.050613 / 0.037052 (0.013561) | 0.258885 / 0.258489 (0.000396) | 0.297257 / 0.293841 (0.003416) | 0.028847 / 0.128546 (-0.099699) | 0.011377 / 0.075646 (-0.064270) | 0.206089 / 0.419271 (-0.213182) | 0.037354 / 0.043533 (-0.006178) | 0.257319 / 0.255139 (0.002180) | 0.275134 / 0.283200 (-0.008066) | 0.018064 / 0.141683 (-0.123619) | 1.112371 / 1.452155 (-0.339783) | 1.160909 / 1.492716 (-0.331807) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101893 / 0.018006 (0.083887) | 0.311084 / 0.000490 (0.310594) | 0.000208 / 0.000200 (0.000008) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019548 / 0.037411 (-0.017863) | 0.064396 / 0.014526 (0.049870) | 0.074900 / 0.176557 (-0.101656) | 0.122750 / 0.737135 (-0.614385) | 0.076693 / 0.296338 (-0.219646) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288609 / 0.215209 (0.073400) | 2.831354 / 2.077655 (0.753699) | 1.453961 / 1.504120 (-0.050159) | 1.327702 / 1.541195 (-0.213493) | 1.382140 / 1.468490 (-0.086351) | 0.568465 / 4.584777 (-4.016312) | 2.427199 / 3.745712 (-1.318513) | 2.810586 / 5.269862 (-2.459275) | 1.839227 / 4.565676 (-2.726449) | 0.063219 / 0.424275 (-0.361056) | 0.005111 / 0.007607 (-0.002496) | 0.341447 / 0.226044 (0.115403) | 3.357429 / 2.268929 (1.088501) | 1.806501 / 55.444624 (-53.638123) | 1.541696 / 6.876477 (-5.334781) | 1.755400 / 2.142072 (-0.386673) | 0.661442 / 4.805227 (-4.143785) | 0.120203 / 6.500664 (-6.380461) | 0.044429 / 0.075469 (-0.031040) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987810 / 1.841788 (-0.853978) | 12.765467 / 8.074308 (4.691159) | 10.497788 / 10.191392 (0.306396) | 0.132723 / 0.680424 (-0.547701) | 0.014484 / 0.534201 (-0.519717) | 0.285763 / 0.579283 (-0.293520) | 0.264377 / 0.434364 (-0.169987) | 0.326971 / 0.540337 (-0.213367) | 0.429432 / 1.386936 (-0.957504) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005996 / 0.011353 (-0.005357) | 0.004092 / 0.011008 (-0.006916) | 0.051660 / 0.038508 (0.013152) | 0.036661 / 0.023109 (0.013552) | 0.271133 / 0.275898 (-0.004765) | 0.295728 / 0.323480 (-0.027752) | 0.004452 / 0.007986 (-0.003534) | 0.002915 / 0.004328 (-0.001413) | 0.050669 / 0.004250 (0.046418) | 0.044431 / 0.037052 (0.007378) | 0.284683 / 0.258489 (0.026194) | 0.318799 / 0.293841 (0.024958) | 0.031094 / 0.128546 (-0.097452) | 0.010810 / 0.075646 (-0.064836) | 0.059740 / 0.419271 (-0.359531) | 0.034912 / 0.043533 (-0.008621) | 0.268779 / 0.255139 (0.013640) | 0.291294 / 0.283200 (0.008095) | 0.019769 / 0.141683 (-0.121914) | 1.124833 / 1.452155 (-0.327322) | 1.168301 / 1.492716 (-0.324416) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097080 / 0.018006 (0.079074) | 0.304636 / 0.000490 (0.304146) | 0.000232 / 0.000200 (0.000032) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023186 / 0.037411 (-0.014225) | 0.082232 / 0.014526 (0.067706) | 0.089427 / 0.176557 (-0.087130) | 0.132715 / 0.737135 (-0.604421) | 0.092820 / 0.296338 (-0.203518) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300672 / 0.215209 (0.085463) | 2.969603 / 2.077655 (0.891948) | 1.577827 / 1.504120 (0.073707) | 1.440768 / 1.541195 (-0.100427) | 1.494526 / 1.468490 (0.026035) | 0.574599 / 4.584777 (-4.010178) | 0.963300 / 3.745712 (-2.782412) | 2.847854 / 5.269862 (-2.422008) | 1.841248 / 4.565676 (-2.724428) | 0.062321 / 0.424275 (-0.361954) | 0.005389 / 0.007607 (-0.002218) | 0.350853 / 0.226044 (0.124808) | 3.463514 / 2.268929 (1.194586) | 1.937661 / 55.444624 (-53.506964) | 1.665320 / 6.876477 (-5.211157) | 1.849028 / 2.142072 (-0.293044) | 0.655333 / 4.805227 (-4.149894) | 0.119062 / 6.500664 (-6.381602) | 0.043387 / 0.075469 (-0.032082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004118 / 1.841788 (-0.837670) | 13.350894 / 8.074308 (5.276585) | 11.179363 / 10.191392 (0.987971) | 0.135169 / 0.680424 (-0.545255) | 0.016298 / 0.534201 (-0.517903) | 0.288467 / 0.579283 (-0.290816) | 0.132712 / 0.434364 (-0.301651) | 0.325436 / 0.540337 (-0.214901) | 0.413406 / 1.386936 (-0.973530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#670e1cf31606f397ae0f858b568b1b4ed50c1843 \"CML watermark\")\n" ]
2024-05-22T09:58:54Z
2024-05-29T13:18:47Z
2024-05-29T13:12:23Z
MEMBER
null
null
null
Preserve column order when loading from a JSON file with a list of dicts (or with a field containing a list of dicts). Additionally, support JSON files with a list-of-strings field. Fix #6913.
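As an illustration of the intended behavior (file name and fields are hypothetical):

```python
import json
from datasets import load_dataset

# A JSON file containing a top-level list of dicts, with a list-of-strings field
records = [
    {"id": 1, "text": "hello", "tags": ["a", "b"]},
    {"id": 2, "text": "world", "tags": ["c"]},
]
with open("data.json", "w") as f:
    json.dump(records, f)

ds = load_dataset("json", data_files="data.json", split="train")
# With this fix, the original key order should be preserved: ['id', 'text', 'tags']
print(ds.column_names)
```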
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6914/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6914/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6914.diff", "html_url": "https://github.com/huggingface/datasets/pull/6914", "merged_at": "2024-05-29T13:12:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/6914.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6914" }
https://api.github.com/repos/huggingface/datasets/issues/5851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5851/comments
https://api.github.com/repos/huggingface/datasets/issues/5851/events
https://github.com/huggingface/datasets/issues/5851
1,707,907,048
I_kwDODunzps5lzJfo
5,851
Error message not clear in interleaving datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/surya-narayanan", "id": 17240858, "login": "surya-narayanan", "node_id": "MDQ6VXNlcjE3MjQwODU4", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "type": "User", "url": "https://api.github.com/users/surya-narayanan", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
[]
2023-05-11T20:52:13Z
2023-05-23T10:32:59Z
2023-05-23T10:32:59Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### System Info standard env ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to interleave the 'sciq', 'wiki' and 'pile-enron' datasets. I think the error I made was that I loaded the train split of one but not the others, but the error message is not too helpful: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /home/suryahari/Vornoi/save_model_ops.py in line 3 41 # %% ----> 43 dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted") File ~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124, in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy) 122 for dataset in datasets[1:]: 123 if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)): --> 124 raise ValueError( 125 f"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects." 126 ) 127 if stopping_strategy not in ["first_exhausted", "all_exhausted"]: 128 raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.") ValueError: Unable to interleave a with a . Expected a list of Dataset objects or a list of IterableDataset objects. ``` ### Expected behavior The error message should hopefully be clearer.
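For context, a minimal sketch of the mismatch behind this error (the dataset choice here is illustrative): `interleave_datasets` requires all inputs to be the same kind, either all map-style `Dataset` objects or all `IterableDataset` objects, and which kind `load_dataset` returns depends on `streaming`:

```python
from datasets import load_dataset, interleave_datasets

ds_map = load_dataset("sciq", split="train")                   # map-style Dataset
ds_iter = load_dataset("sciq", split="train", streaming=True)  # IterableDataset

# Mixing the two kinds raises the ValueError quoted above
try:
    interleave_datasets([ds_map, ds_iter], stopping_strategy="all_exhausted")
except ValueError as e:
    print(e)

# One fix: convert the map-style dataset so that both inputs are iterable
combined = interleave_datasets(
    [ds_map.to_iterable_dataset(), ds_iter], stopping_strategy="all_exhausted"
)
```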
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5851/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5851/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6035/comments
https://api.github.com/repos/huggingface/datasets/issues/6035/events
https://github.com/huggingface/datasets/pull/6035
1,805,087,687
PR_kwDODunzps5Vh_QR
6,035
Dataset representation
{ "avatar_url": "https://avatars.githubusercontent.com/u/63643948?v=4", "events_url": "https://api.github.com/users/Ganryuu/events{/privacy}", "followers_url": "https://api.github.com/users/Ganryuu/followers", "following_url": "https://api.github.com/users/Ganryuu/following{/other_user}", "gists_url": "https://api.github.com/users/Ganryuu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ganryuu", "id": 63643948, "login": "Ganryuu", "node_id": "MDQ6VXNlcjYzNjQzOTQ4", "organizations_url": "https://api.github.com/users/Ganryuu/orgs", "received_events_url": "https://api.github.com/users/Ganryuu/received_events", "repos_url": "https://api.github.com/users/Ganryuu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ganryuu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ganryuu/subscriptions", "type": "User", "url": "https://api.github.com/users/Ganryuu", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6035). All of your documentation changes will be reflected on that endpoint." ]
2023-07-14T15:42:37Z
2023-07-19T19:41:35Z
null
NONE
null
null
null
`__repr__` and `_repr_html_` are now both similar to those of Polars.
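For reference, a quick way to inspect the two representations this PR changes:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})
print(repr(ds))  # the plain-text __repr__
# In a notebook, displaying `ds` directly would use _repr_html_ instead
```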
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6035/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6035/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6035.diff", "html_url": "https://github.com/huggingface/datasets/pull/6035", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6035.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6035" }
https://api.github.com/repos/huggingface/datasets/issues/5576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5576/comments
https://api.github.com/repos/huggingface/datasets/issues/5576/events
https://github.com/huggingface/datasets/issues/5576
1,598,582,744
I_kwDODunzps5fSG_Y
5,576
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4", "events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}", "followers_url": "https://api.github.com/users/wjfwzzc/followers", "following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}", "gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wjfwzzc", "id": 5126316, "login": "wjfwzzc", "node_id": "MDQ6VXNlcjUxMjYzMTY=", "organizations_url": "https://api.github.com/users/wjfwzzc/orgs", "received_events_url": "https://api.github.com/users/wjfwzzc/received_events", "repos_url": "https://api.github.com/users/wjfwzzc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions", "type": "User", "url": "https://api.github.com/users/wjfwzzc", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Duplicated issue." ]
2023-02-24T12:57:49Z
2023-02-24T12:58:31Z
2023-02-24T12:58:18Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. I worked around this by downloading `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes). _Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
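For illustration, a sketch of another workaround for this kind of overflow: declare or cast the offending feature to a wider integer type (the column name is borrowed from the quote above):

```python
from datasets import Dataset, Features, Sequence, Value

# Values like 528 overflow int8 (-128..127), so declare a wider type up front
features = Features({"reddit_scores": Sequence(Value("int32"))})
ds = Dataset.from_dict({"reddit_scores": [[528, -3], [12]]}, features=features)

# Or widen an existing column after the fact
ds = ds.cast_column("reddit_scores", Sequence(Value("int64")))
print(ds.features)
```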
{ "avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4", "events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}", "followers_url": "https://api.github.com/users/wjfwzzc/followers", "following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}", "gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wjfwzzc", "id": 5126316, "login": "wjfwzzc", "node_id": "MDQ6VXNlcjUxMjYzMTY=", "organizations_url": "https://api.github.com/users/wjfwzzc/orgs", "received_events_url": "https://api.github.com/users/wjfwzzc/received_events", "repos_url": "https://api.github.com/users/wjfwzzc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions", "type": "User", "url": "https://api.github.com/users/wjfwzzc", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5576/timeline
null
not_planned
null
null
https://api.github.com/repos/huggingface/datasets/issues/5598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5598/comments
https://api.github.com/repos/huggingface/datasets/issues/5598/events
https://github.com/huggingface/datasets/pull/5598
1,605,018,478
PR_kwDODunzps5LCMiX
5,598
Fix push_to_hub with no dataset_infos
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008823 / 0.011353 (-0.002529) | 0.004738 / 0.011008 (-0.006270) | 0.102338 / 0.038508 (0.063830) | 0.030603 / 0.023109 (0.007494) | 0.302995 / 0.275898 (0.027097) | 0.362080 / 0.323480 (0.038600) | 0.007096 / 0.007986 (-0.000889) | 0.003493 / 0.004328 (-0.000835) | 0.079129 / 0.004250 (0.074878) | 0.037966 / 0.037052 (0.000914) | 0.310412 / 0.258489 (0.051923) | 0.346740 / 0.293841 (0.052899) | 0.033795 / 0.128546 (-0.094751) | 0.011595 / 0.075646 (-0.064051) | 0.325189 / 0.419271 (-0.094083) | 0.041679 / 0.043533 (-0.001854) | 0.302339 / 0.255139 (0.047200) | 0.322519 / 0.283200 (0.039319) | 0.089058 / 0.141683 (-0.052625) | 1.496223 / 1.452155 (0.044068) | 1.512562 / 1.492716 (0.019845) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009298 / 0.018006 (-0.008709) | 0.406726 / 0.000490 (0.406236) | 0.003753 / 0.000200 (0.003553) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023327 / 0.037411 (-0.014084) | 0.098175 / 0.014526 (0.083649) | 0.106040 / 0.176557 (-0.070516) | 0.151934 / 0.737135 (-0.585201) | 0.108465 / 0.296338 (-0.187873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419073 / 0.215209 (0.203864) | 4.188012 / 2.077655 (2.110358) | 
1.857667 / 1.504120 (0.353547) | 1.664124 / 1.541195 (0.122929) | 1.704341 / 1.468490 (0.235851) | 0.699671 / 4.584777 (-3.885106) | 3.391110 / 3.745712 (-0.354602) | 1.871136 / 5.269862 (-3.398725) | 1.176794 / 4.565676 (-3.388882) | 0.083322 / 0.424275 (-0.340953) | 0.012450 / 0.007607 (0.004843) | 0.525058 / 0.226044 (0.299014) | 5.265425 / 2.268929 (2.996497) | 2.320672 / 55.444624 (-53.123952) | 1.964806 / 6.876477 (-4.911671) | 2.027055 / 2.142072 (-0.115017) | 0.819768 / 4.805227 (-3.985459) | 0.149638 / 6.500664 (-6.351026) | 0.064774 / 0.075469 (-0.010695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204575 / 1.841788 (-0.637212) | 13.651878 / 8.074308 (5.577570) | 13.751973 / 10.191392 (3.560581) | 0.154781 / 0.680424 (-0.525643) | 0.028887 / 0.534201 (-0.505314) | 0.404905 / 0.579283 (-0.174379) | 0.411320 / 0.434364 (-0.023043) | 0.485026 / 0.540337 (-0.055311) | 0.579690 / 1.386936 (-0.807246) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006615 / 0.011353 (-0.004737) | 0.004606 / 0.011008 (-0.006402) | 0.076099 / 0.038508 (0.037591) | 0.027247 / 0.023109 (0.004137) | 0.360731 / 0.275898 (0.084833) | 0.393688 / 0.323480 (0.070208) | 0.005079 / 0.007986 (-0.002906) | 0.003345 / 0.004328 (-0.000984) | 0.077184 / 0.004250 (0.072934) | 0.037850 / 0.037052 (0.000797) | 0.379738 / 0.258489 (0.121249) | 0.400474 / 0.293841 (0.106633) | 0.031581 / 0.128546 (-0.096966) | 0.011508 / 0.075646 (-0.064138) | 0.084966 / 0.419271 (-0.334306) | 0.041740 / 0.043533 (-0.001793) | 0.349887 / 0.255139 (0.094748) | 0.384405 / 0.283200 (0.101205) | 0.089022 / 0.141683 (-0.052661) | 1.503448 / 1.452155 (0.051293) | 1.564870 / 1.492716 (0.072154) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233581 / 0.018006 (0.215574) | 0.413819 / 0.000490 (0.413330) | 0.000398 / 0.000200 (0.000198) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024805 / 0.037411 (-0.012607) | 0.101348 / 0.014526 (0.086822) | 0.108701 / 0.176557 (-0.067856) | 0.160011 / 0.737135 (-0.577124) | 0.111696 / 0.296338 (-0.184642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436303 / 0.215209 (0.221094) | 4.368684 / 2.077655 (2.291029) | 2.082366 / 1.504120 (0.578247) | 1.888108 / 1.541195 (0.346913) | 1.958295 / 1.468490 (0.489804) | 0.700858 / 4.584777 (-3.883919) | 3.408321 / 3.745712 (-0.337391) | 1.872960 / 5.269862 (-3.396902) | 1.165116 / 4.565676 (-3.400560) | 0.083556 / 0.424275 (-0.340719) | 0.012348 / 0.007607 (0.004741) | 0.536551 / 0.226044 (0.310506) | 5.359974 / 2.268929 (3.091045) | 2.539043 / 55.444624 (-52.905581) | 2.200314 / 6.876477 (-4.676162) | 2.222051 / 2.142072 (0.079979) | 0.808567 / 4.805227 (-3.996661) | 0.151222 / 6.500664 (-6.349442) | 0.066351 / 0.075469 (-0.009118) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265502 / 1.841788 (-0.576286) | 13.692066 / 8.074308 (5.617758) | 13.124507 / 10.191392 (2.933115) | 0.129545 / 0.680424 (-0.550879) | 0.016827 / 0.534201 (-0.517374) | 0.380326 / 0.579283 (-0.198957) | 0.387268 / 0.434364 (-0.047096) | 0.463722 / 0.540337 (-0.076616) | 0.553681 / 1.386936 (-0.833255) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6569014a9948eab7d031a3587405e64ba92d6c59 \"CML watermark\")\n" ]
2023-03-01T13:54:06Z
2023-03-02T13:47:13Z
2023-03-02T13:40:17Z
MEMBER
null
null
null
As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags. cc @clefourrier
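A minimal sketch of the failing call (the repo id is hypothetical, and Hub authentication is needed to actually run this):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
# Before this fix, this raised when the target repo already had a README.md
# whose YAML front matter lacked a `dataset_info` key
ds.push_to_hub("username/existing-repo-without-dataset-info")
```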
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5598/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5598.diff", "html_url": "https://github.com/huggingface/datasets/pull/5598", "merged_at": "2023-03-02T13:40:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5598.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5598" }
https://api.github.com/repos/huggingface/datasets/issues/6655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6655/comments
https://api.github.com/repos/huggingface/datasets/issues/6655/events
https://github.com/huggingface/datasets/issues/6655
2,127,020,042
I_kwDODunzps5-x8AK
6,655
Cannot load the dataset go_emotions
{ "avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4", "events_url": "https://api.github.com/users/arame/events{/privacy}", "followers_url": "https://api.github.com/users/arame/followers", "following_url": "https://api.github.com/users/arame/following{/other_user}", "gists_url": "https://api.github.com/users/arame/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arame", "id": 688324, "login": "arame", "node_id": "MDQ6VXNlcjY4ODMyNA==", "organizations_url": "https://api.github.com/users/arame/orgs", "received_events_url": "https://api.github.com/users/arame/received_events", "repos_url": "https://api.github.com/users/arame/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arame/subscriptions", "type": "User", "url": "https://api.github.com/users/arame", "user_view_type": "public" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wondering: does it make sense to use `transformers` in this case, even if we don't need it to load the `go_emotions` dataset (already converted to Parquet files)?\r\n- Maybe @mariosasko can give some insight, as he included these code lines:\r\n - #6454\r\n\r\nhttps://github.com/huggingface/datasets/blob/9751fb14594d354e952f0ebdfaf31cb203b011e7/src/datasets/utils/_dill.py#L60-L63\r\n", "The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n\r\nHowever, the logic does not account for `transformers<3`, so we should add a version check to fix that.", "> The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n> \r\n> However, the logic does not account for `transformers<3`, so we should add a version check to fix that.\r\n\r\nThank you for that Mario. Would this fix solve the problem and do you have any idea when it will be done? \r\nI tried the pip install suggested by Albert and it made no difference.", "I tried running the code today and the problem appears to be fixed." ]
2024-02-09T12:15:39Z
2024-02-12T09:35:55Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When I run the following code I get an exception; `go_emotions = load_dataset("go_emotions")` > AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 go_emotions = load_dataset("go_emotions") 2 data = go_emotions.data File c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2518 verification_mode = VerificationMode( 2519 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2520 ) 2522 # Create a dataset builder -> 2523 builder_instance = load_dataset_builder( 2524 path=path, 2525 name=name, 2526 data_dir=data_dir, 2527 data_files=data_files, 2528 cache_dir=cache_dir, 2529 features=features, 2530 download_config=download_config, 2531 download_mode=download_mode, 2532 revision=revision, 2533 token=token, 2534 storage_options=storage_options, 2535 trust_remote_code=trust_remote_code, 2536 _require_default_config_name=name is None, ... ---> 63 if issubclass(obj_type, transformers.PreTrainedTokenizerBase): 64 pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase) 66 # Unwrap `torch.compile`-ed functions AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase' Output is truncated.
### Steps to reproduce the bug ``` from datasets import load_dataset go_emotions = load_dataset("go_emotions") ``` ### Expected behavior Should simply load the variable with the data from the file ### Environment info - `datasets` version: 2.16.1 - Platform: Windows-10-10.0.22631-SP0 - Python version: 3.11.4 - `huggingface_hub` version: 0.20.3 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.10.0
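The fix the maintainers discuss amounts to a version guard around the lazy reducer registration. A rough sketch of that idea (this is not the actual `datasets` source; names and the registration step are illustrative):

```python
import sys
from packaging import version

def maybe_register_transformers_reducer():
    # Only act if transformers was already imported (imports are expensive)
    transformers = sys.modules.get("transformers")
    if transformers is None:
        return
    # PreTrainedTokenizerBase only exists for transformers >= 3.0.1,
    # so guard the attribute access with a version check
    if version.parse(transformers.__version__) >= version.parse("3.0.1"):
        base = transformers.PreTrainedTokenizerBase
        # ... register a custom pickling reducer for `base` here ...
```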
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6655/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5258/comments
https://api.github.com/repos/huggingface/datasets/issues/5258/events
https://github.com/huggingface/datasets/issues/5258
1,453,516,636
I_kwDODunzps5Woudc
5,258
Restore order of split names in dataset_info for canonical datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "The bulk edit is running...\r\n\r\nSee for example: \r\n- A single config: https://huggingface.co/datasets/acronym_identification/discussions/2\r\n- Multiple configs: https://huggingface.co/datasets/babi_qa/discussions/1", "TODO: Add \"dataset_info\" YAML metadata to:\r\n- [x] \"chr_en\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n - Fixing PR: https://huggingface.co/datasets/chr_en/discussions/1 \r\n- [x] \"conll2000\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"crime_and_punish\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"dart\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"iwslt2017\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [ ] \"mc4\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"the_pile\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"timit_asr\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card", "The bulk edit is finished." ]
2022-11-17T15:13:15Z
2023-02-16T09:49:05Z
2022-11-19T06:51:37Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the datasets. I'm making a bulk edit to align the order of the splits appearing in the metadata info with the order appearing in the loading script. Related to: - #5202
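For reference, a minimal sketch of how to check which split order ended up in the metadata, assuming the Hub dataset is still reachable under the name from the commit linked above (`bc2gm_corpus`):

```python
from datasets import load_dataset_builder

# The builder's info reflects the split order recorded in the metadata,
# which should match the loading script (train/validation/test),
# not an alphabetical sort.
builder = load_dataset_builder("bc2gm_corpus")
print(list(builder.info.splits))
```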
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5258/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5258/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6755/comments
https://api.github.com/repos/huggingface/datasets/issues/6755/events
https://github.com/huggingface/datasets/issues/6755
2,204,573,289
I_kwDODunzps6DZx5p
6,755
Small typo on the documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4", "events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}", "followers_url": "https://api.github.com/users/fostiropoulos/followers", "following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}", "gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fostiropoulos", "id": 4337024, "login": "fostiropoulos", "node_id": "MDQ6VXNlcjQzMzcwMjQ=", "organizations_url": "https://api.github.com/users/fostiropoulos/orgs", "received_events_url": "https://api.github.com/users/fostiropoulos/received_events", "repos_url": "https://api.github.com/users/fostiropoulos/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions", "type": "User", "url": "https://api.github.com/users/fostiropoulos", "user_view_type": "public" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4", "events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}", "followers_url": "https://api.github.com/users/JINO-ROHIT/followers", "following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}", "gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JINO-ROHIT", "id": 63234112, "login": "JINO-ROHIT", "node_id": "MDQ6VXNlcjYzMjM0MTEy", "organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs", "received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events", "repos_url": "https://api.github.com/users/JINO-ROHIT/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions", "type": "User", "url": "https://api.github.com/users/JINO-ROHIT", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4", "events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}", "followers_url": "https://api.github.com/users/JINO-ROHIT/followers", "following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}", "gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JINO-ROHIT", "id": 63234112, "login": "JINO-ROHIT", "node_id": "MDQ6VXNlcjYzMjM0MTEy", "organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs", "received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events", "repos_url": "https://api.github.com/users/JINO-ROHIT/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions", "type": "User", "url": "https://api.github.com/users/JINO-ROHIT", "user_view_type": "public" } ]
null
[ "Thanks for reporting @fostiropoulos! I've edited your comment to fix the link to the problematic line.\r\n", "@mariosasko can i take this up?", "#self-assign" ]
2024-03-24T21:47:52Z
2024-04-02T14:01:19Z
2024-04-02T14:01:19Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938 It should be `caching is enabled`. ### Steps to reproduce the bug Please visit https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938 ### Expected behavior `caching is enabled` ### Environment info - `datasets` version: 2.17.1 - Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35 - Python version: 3.11.7 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6755/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6755/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/4642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4642/comments
https://api.github.com/repos/huggingface/datasets/issues/4642/events
https://github.com/huggingface/datasets/issues/4642
1,295,748,083
I_kwDODunzps5NO4vz
4,642
Streaming issue for ccdv/pubmed-summarization
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks for reporting @lewtun.\r\n\r\nI confirm there is an issue with streaming: it does not stream locally. ", "Oh, after investigation, the source of the issue is in the Hub dataset loading script.\r\n\r\nI'm opening a PR on the Hub dataset.", "I've opened a PR on their Hub dataset to support streaming: https://huggingface.co/datasets/ccdv/pubmed-summarization/discussions/2" ]
2022-07-06T12:13:07Z
2022-07-06T14:17:34Z
2022-07-06T14:17:34Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Link https://huggingface.co/datasets/ccdv/pubmed-summarization ### Description This was reported by a [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/7). It seems like streaming doesn't work due to the way the dataset loading script is defined? ``` Status code: 400 Exception: FileNotFoundError Message: https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip/train.txt ``` ### Owner No
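For context, a hedged sketch of the streaming-compatible pattern that avoids resolving pseudo-paths like `train.zip/train.txt`; this is not the actual `ccdv/pubmed-summarization` script, and the class name, feature names, and URL layout are illustrative:

```python
import datasets

_URL = "https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip"

class PubmedSummarization(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # iter_archive streams the zip members one by one, so no
                # "train.zip/train.txt" pseudo-URL is ever built
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        idx = 0
        for path, f in files:
            if path.endswith("train.txt"):
                for line in f:
                    yield idx, {"text": line.decode("utf-8")}
                    idx += 1
```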
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4642/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5638/comments
https://api.github.com/repos/huggingface/datasets/issues/5638/events
https://github.com/huggingface/datasets/issues/5638
1,625,564,471
I_kwDODunzps5g5CU3
5,638
xPath to implement all operations for Path
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ " I think https://github.com/fsspec/universal_pathlib is the project you are looking for.\r\n\r\n`xPath` has the methods often used in dataset scripts, and `mkdir` is not one of them (`dl_manager`'s role is to \"interact\" with the file system, so using `mkdir` is discouraged).", "Right is there a difference between UPath and xPath? Typically is xPath less well implemented compared to Upath, ie missing some implementations of some methods? Or are there methods in xPath that are not implemented with UPath?", "`xPath` is an internal component (it doesn't have a leading underscore in the name, but it should) not meant to be used outside of `datasets`, and it's only tested on HTTP URLs, not S3.\r\n\r\n", "Okay I understand that xPath won't support my usecase. What I was perhaps getting to is why not use UPath in `datasets` instead of `xPath` if UPath seems to have strictly more robust implementations.", "It seems like `universal_pathlib` does not support `fsspec` URL chaining (`::` is the chaining symbol) and \"compression\" filesystems (e.g., `zip`), but this is what we need to access and stream files from within an archive (e.g., we want to stream URLs such as this one: `zip://data.parquet::https://www.dummyurl.com/archive.zip`)" ]
2023-03-15T13:47:11Z
2023-03-17T13:21:12Z
2023-03-17T13:21:12Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly: they default to `Path` methods, which only work locally, instead of relying on `fsspec` methods. ### Motivation I'm using xPath to interact with remote objects. ### Your contribution I could try to make a PR. I'm a bit unfamiliar with chaining right now.
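A minimal sketch of what an fsspec-backed `mkdir` could look like, assuming `fsspec` is installed; this illustrates the requested behavior and is not `datasets`' actual internals:

```python
import fsspec

def remote_mkdir(urlpath: str, *, exist_ok: bool = False) -> None:
    # Resolve the filesystem from the URL scheme (local, s3, gcs, ...) and
    # create the directory through fsspec rather than pathlib.Path.mkdir.
    fs, path = fsspec.core.url_to_fs(urlpath)
    fs.makedirs(path, exist_ok=exist_ok)

remote_mkdir("s3://my-bucket/new-dir", exist_ok=True)  # hypothetical bucket
```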
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5638/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5638/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6854/comments
https://api.github.com/repos/huggingface/datasets/issues/6854/events
https://github.com/huggingface/datasets/issues/6854
2,274,767,686
I_kwDODunzps6HljNG
6,854
Wrong example of usage when config name is missing for community script-datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-05-02T06:59:39Z
2024-05-03T15:51:59Z
2024-05-03T15:51:58Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the example of usage shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example: ```python >>> ds = load_dataset("google/fleurs") ValueError: Config name is missing. Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all'] Example of usage: `load_dataset('fleurs', 'af_za')` ``` Note that the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs".
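A hedged sketch of the fix: build the usage example from the full repo id instead of the bare script name. The function name and message layout here are illustrative, not the actual code in `datasets`:

```python
def config_missing_error(repo_id: str, config_names: list) -> ValueError:
    # Use the full repo id ("google/fleurs"), not just the script name
    # ("fleurs"), so the suggested call actually works when copy-pasted.
    return ValueError(
        "Config name is missing.\n"
        f"Please pick one among the available configs: {config_names}\n"
        f"Example of usage:\n\t`load_dataset('{repo_id}', '{config_names[0]}')`"
    )
```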
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6854/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6854/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5132/comments
https://api.github.com/repos/huggingface/datasets/issues/5132/events
https://github.com/huggingface/datasets/issues/5132
1,413,607,306
I_kwDODunzps5UQe-K
5,132
Deprecate `num_proc` parameter in `DownloadManager.extract`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "DF8D62", "default": false, "description": "", "id": 4614514401, "name": "hacktoberfest", "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4", "events_url": "https://api.github.com/users/ayushthe1/events{/privacy}", "followers_url": "https://api.github.com/users/ayushthe1/followers", "following_url": "https://api.github.com/users/ayushthe1/following{/other_user}", "gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ayushthe1", "id": 114604338, "login": "ayushthe1", "node_id": "U_kgDOBtS5Mg", "organizations_url": "https://api.github.com/users/ayushthe1/orgs", "received_events_url": "https://api.github.com/users/ayushthe1/received_events", "repos_url": "https://api.github.com/users/ayushthe1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions", "type": "User", "url": "https://api.github.com/users/ayushthe1", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4", "events_url": "https://api.github.com/users/ayushthe1/events{/privacy}", "followers_url": "https://api.github.com/users/ayushthe1/followers", "following_url": "https://api.github.com/users/ayushthe1/following{/other_user}", "gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ayushthe1", "id": 114604338, "login": "ayushthe1", "node_id": "U_kgDOBtS5Mg", "organizations_url": "https://api.github.com/users/ayushthe1/orgs", "received_events_url": "https://api.github.com/users/ayushthe1/received_events", "repos_url": "https://api.github.com/users/ayushthe1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions", "type": "User", "url": "https://api.github.com/users/ayushthe1", "user_view_type": "public" } ]
null
[ "I can take this! #self-assign", "#self-assign", "@lazarust i'm already working on this issue :smile: ", "#self-assign", "hey @mariosasko , i made a pr for this issue. Could you please review it." ]
2022-10-18T17:41:05Z
2022-10-25T15:56:46Z
2022-10-25T15:56:46Z
COLLABORATOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`.
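A minimal sketch of the proposed deprecation, mirroring the pattern used in `DownloadManager.download`; the sentinel value and parameter plumbing are illustrative, not the merged implementation:

```python
import warnings

from datasets.utils.py_utils import map_nested

def extract(self, path_or_paths, num_proc="deprecated"):
    if num_proc != "deprecated":
        warnings.warn(
            "'num_proc' was deprecated; pass DownloadConfig(num_proc=...) instead.",
            FutureWarning,
        )
    # Take parallelism from the download config, as DownloadManager.download
    # does, so both download managers keep compatible signatures.
    return map_nested(
        self._extract, path_or_paths, num_proc=self.download_config.num_proc
    )
```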
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5132/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5132/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6075/comments
https://api.github.com/repos/huggingface/datasets/issues/6075/events
https://github.com/huggingface/datasets/issues/6075
1,822,341,398
I_kwDODunzps5snrkW
6,075
Error loading music files using `load_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/susnato", "id": 56069179, "login": "susnato", "node_id": "MDQ6VXNlcjU2MDY5MTc5", "organizations_url": "https://api.github.com/users/susnato/orgs", "received_events_url": "https://api.github.com/users/susnato/received_events", "repos_url": "https://api.github.com/users/susnato/repos", "site_admin": false, "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "type": "User", "url": "https://api.github.com/users/susnato", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.", "I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!" ]
2023-07-26T12:44:05Z
2023-07-26T13:08:08Z
2023-07-26T13:08:08Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test I got the following error - ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__ return self._getitem(key) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem formatted_output = format_table( File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table return formatter(pa_table, query_type=query_type) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__ return self.format_column(pa_table) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column return self.features.decode_column(column, column_name) if self.features else column File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp> [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example array, sampling_rate = sf.read(f) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read with SoundFile(file, 'r', samplerate, channels, File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__ self._file = self._open(file, mode_int, closefd) File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open _error_check(_snd.sf_error(file_ptr), File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace')) RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised. ``` ### Steps to reproduce the bug Code to reproduce the error - ```python from datasets import load_dataset ds = load_dataset("susnato/pop2piano_real_music_test", split="test") print(ds[0]) ``` ### Expected behavior I should be able to read the music file without any error. ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
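As the comments above confirm, the "Format not recognised" error comes from an outdated `soundfile`; MP3 decoding requires `soundfile>=0.12.1`. A quick sanity check, assuming `soundfile` is importable:

```python
import soundfile as sf

print(sf.__version__)          # MP3 support requires soundfile>=0.12.1
print(sf.available_formats())  # "MP3" should appear among the supported formats
```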
{ "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/susnato", "id": 56069179, "login": "susnato", "node_id": "MDQ6VXNlcjU2MDY5MTc5", "organizations_url": "https://api.github.com/users/susnato/orgs", "received_events_url": "https://api.github.com/users/susnato/received_events", "repos_url": "https://api.github.com/users/susnato/repos", "site_admin": false, "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "type": "User", "url": "https://api.github.com/users/susnato", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6075/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6473/comments
https://api.github.com/repos/huggingface/datasets/issues/6473/events
https://github.com/huggingface/datasets/pull/6473
2,026,495,084
PR_kwDODunzps5hMbvz
6,473
Fix CI quality
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6473). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005270 / 0.011353 (-0.006083) | 0.003471 / 0.011008 (-0.007537) | 0.061942 / 0.038508 (0.023434) | 0.052671 / 0.023109 (0.029562) | 0.250541 / 0.275898 (-0.025357) | 0.270677 / 0.323480 (-0.052803) | 0.002933 / 0.007986 (-0.005053) | 0.003264 / 0.004328 (-0.001064) | 0.048055 / 0.004250 (0.043804) | 0.037459 / 0.037052 (0.000407) | 0.254926 / 0.258489 (-0.003563) | 0.292547 / 0.293841 (-0.001294) | 0.027959 / 0.128546 (-0.100587) | 0.010762 / 0.075646 (-0.064884) | 0.204961 / 0.419271 (-0.214310) | 0.035488 / 0.043533 (-0.008045) | 0.254102 / 0.255139 (-0.001037) | 0.273654 / 0.283200 (-0.009546) | 0.018126 / 0.141683 (-0.123556) | 1.082330 / 1.452155 (-0.369825) | 1.147179 / 1.492716 (-0.345538) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093223 / 0.018006 (0.075217) | 0.301912 / 0.000490 (0.301422) | 0.000219 / 0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018407 / 0.037411 (-0.019004) | 0.060412 / 0.014526 (0.045886) | 0.074063 / 0.176557 (-0.102494) | 0.118743 / 0.737135 (-0.618392) | 0.076484 / 0.296338 (-0.219854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289929 / 0.215209 (0.074720) | 2.825096 / 2.077655 (0.747442) | 1.511444 / 1.504120 (0.007324) | 1.394812 / 1.541195 (-0.146383) | 1.419751 / 1.468490 (-0.048739) | 0.569995 / 4.584777 (-4.014782) | 2.402586 / 3.745712 (-1.343126) | 2.826223 / 5.269862 (-2.443639) | 1.751554 / 4.565676 (-2.814123) | 0.064266 / 0.424275 (-0.360009) | 0.005047 / 0.007607 (-0.002561) | 0.341513 / 0.226044 (0.115469) | 3.372106 / 2.268929 (1.103177) | 1.872693 / 55.444624 (-53.571931) | 1.588200 / 6.876477 (-5.288276) | 1.630800 / 2.142072 (-0.511272) | 0.654266 / 4.805227 (-4.150961) | 0.124292 / 6.500664 (-6.376372) | 0.042876 / 0.075469 (-0.032593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948406 / 1.841788 (-0.893382) | 11.652947 / 8.074308 (3.578639) | 10.218195 / 10.191392 (0.026803) | 0.128447 / 0.680424 (-0.551976) | 0.014092 / 0.534201 (-0.520109) | 0.287631 / 0.579283 (-0.291652) | 0.264843 / 0.434364 (-0.169521) | 0.329997 / 0.540337 (-0.210340) | 0.439597 / 1.386936 (-0.947339) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005418 / 0.011353 (-0.005935) | 0.003589 / 0.011008 (-0.007419) | 0.050074 / 0.038508 (0.011566) | 0.052566 / 0.023109 (0.029456) | 0.293447 / 0.275898 (0.017549) | 0.320518 / 0.323480 (-0.002962) | 0.004094 / 0.007986 (-0.003892) | 0.002690 / 0.004328 (-0.001639) | 0.048200 / 0.004250 (0.043949) | 0.040692 / 0.037052 (0.003640) | 0.297086 / 0.258489 (0.038597) | 0.323827 / 0.293841 (0.029986) | 0.029511 / 0.128546 (-0.099035) | 0.011079 / 0.075646 (-0.064568) | 0.058562 / 0.419271 (-0.360709) | 0.032897 / 0.043533 (-0.010636) | 0.297244 / 0.255139 (0.042105) | 0.316812 / 0.283200 (0.033612) | 0.018468 / 0.141683 (-0.123215) | 1.140948 / 1.452155 (-0.311207) | 1.195453 / 1.492716 (-0.297263) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092677 / 0.018006 (0.074671) | 0.300775 / 0.000490 (0.300285) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021617 / 0.037411 (-0.015794) | 0.077135 / 0.014526 (0.062610) | 0.079848 / 0.176557 (-0.096709) | 0.118475 / 0.737135 (-0.618661) | 0.081174 / 0.296338 (-0.215164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294424 / 0.215209 (0.079215) | 2.863989 / 2.077655 (0.786334) | 1.590604 / 1.504120 (0.086484) | 1.474345 / 1.541195 (-0.066849) | 1.482120 / 1.468490 (0.013630) | 0.567829 / 4.584777 (-4.016948) | 2.493782 / 3.745712 (-1.251930) | 2.823460 / 5.269862 (-2.446402) | 1.732677 / 4.565676 (-2.833000) | 0.065518 / 0.424275 (-0.358757) | 0.004923 / 0.007607 (-0.002684) | 0.349313 / 0.226044 (0.123268) | 3.428618 / 2.268929 (1.159689) | 1.970641 / 55.444624 (-53.473983) | 1.655884 / 6.876477 (-5.220593) | 1.657151 / 2.142072 (-0.484921) | 0.661208 / 4.805227 (-4.144019) | 0.119129 / 6.500664 (-6.381535) | 0.040770 / 0.075469 (-0.034699) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964865 / 1.841788 (-0.876923) | 12.050218 / 8.074308 (3.975910) | 10.458749 / 10.191392 (0.267357) | 0.141856 / 0.680424 (-0.538568) | 0.015091 / 0.534201 (-0.519109) | 0.288897 / 0.579283 (-0.290387) | 0.275343 / 0.434364 (-0.159021) | 0.328363 / 0.540337 (-0.211975) | 0.579243 / 1.386936 (-0.807693) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7721021e284859ea0952444bae6300a0d00794f \"CML watermark\")\n" ]
2023-12-05T15:36:23Z
2023-12-05T18:14:50Z
2023-12-05T18:08:41Z
MEMBER
null
null
null
Fix #6472.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6473/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6473.diff", "html_url": "https://github.com/huggingface/datasets/pull/6473", "merged_at": "2023-12-05T18:08:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/6473.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6473" }
https://api.github.com/repos/huggingface/datasets/issues/6101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6101/comments
https://api.github.com/repos/huggingface/datasets/issues/6101/events
https://github.com/huggingface/datasets/pull/6101
1,828,469,648
PR_kwDODunzps5WwspW
6,101
Release 2.14.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006543 / 0.011353 (-0.004810) | 0.003894 / 0.011008 (-0.007115) | 0.084742 / 0.038508 (0.046234) | 0.072942 / 0.023109 (0.049833) | 0.310722 / 0.275898 (0.034824) | 0.346806 / 0.323480 (0.023326) | 0.005373 / 0.007986 (-0.002613) | 0.003270 / 0.004328 (-0.001059) | 0.064379 / 0.004250 (0.060128) | 0.054876 / 0.037052 (0.017824) | 0.316794 / 0.258489 (0.058305) | 0.350353 / 0.293841 (0.056512) | 0.030683 / 0.128546 (-0.097863) | 0.008275 / 0.075646 (-0.067371) | 0.288747 / 0.419271 (-0.130525) | 0.051892 / 0.043533 (0.008359) | 0.315060 / 0.255139 (0.059921) | 0.331664 / 0.283200 (0.048464) | 0.023334 / 0.141683 (-0.118349) | 1.499734 / 1.452155 (0.047579) | 1.542006 / 1.492716 (0.049290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210488 / 0.018006 (0.192482) | 0.462187 / 0.000490 (0.461697) | 0.001280 / 0.000200 (0.001080) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027812 / 0.037411 (-0.009599) | 0.082492 / 0.014526 (0.067966) | 0.096504 / 0.176557 (-0.080053) | 0.158164 / 0.737135 (-0.578972) | 0.096678 / 0.296338 (-0.199661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403317 / 0.215209 (0.188108) | 4.008367 / 2.077655 (1.930713) | 2.033067 / 1.504120 (0.528947) | 1.869484 / 1.541195 (0.328290) | 1.947450 / 1.468490 
(0.478960) | 0.494048 / 4.584777 (-4.090729) | 3.631673 / 3.745712 (-0.114039) | 5.322167 / 5.269862 (0.052306) | 3.125570 / 4.565676 (-1.440107) | 0.057341 / 0.424275 (-0.366934) | 0.007318 / 0.007607 (-0.000289) | 0.483990 / 0.226044 (0.257945) | 4.830573 / 2.268929 (2.561645) | 2.543267 / 55.444624 (-52.901358) | 2.217890 / 6.876477 (-4.658587) | 2.435111 / 2.142072 (0.293038) | 0.597920 / 4.805227 (-4.207307) | 0.132690 / 6.500664 (-6.367974) | 0.060160 / 0.075469 (-0.015309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247656 / 1.841788 (-0.594131) | 19.436984 / 8.074308 (11.362675) | 14.504249 / 10.191392 (4.312857) | 0.167444 / 0.680424 (-0.512980) | 0.018214 / 0.534201 (-0.515987) | 0.394790 / 0.579283 (-0.184493) | 0.413770 / 0.434364 (-0.020594) | 0.474290 / 0.540337 (-0.066048) | 0.646782 / 1.386936 (-0.740154) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006575 / 0.011353 (-0.004778) | 0.003924 / 0.011008 (-0.007084) | 0.064402 / 0.038508 (0.025893) | 0.072569 / 0.023109 (0.049460) | 0.361981 / 0.275898 (0.086083) | 0.398660 / 0.323480 (0.075180) | 0.005380 / 0.007986 (-0.002605) | 0.003355 / 0.004328 (-0.000974) | 0.065173 / 0.004250 (0.060923) | 0.057120 / 0.037052 (0.020067) | 0.366347 / 0.258489 (0.107858) | 0.402723 / 0.293841 (0.108882) | 0.031258 / 0.128546 (-0.097288) | 0.008499 / 0.075646 (-0.067147) | 0.070558 / 0.419271 (-0.348714) | 0.050089 / 0.043533 (0.006556) | 0.361280 / 0.255139 (0.106141) | 0.384497 / 0.283200 (0.101297) | 0.024789 / 0.141683 (-0.116893) | 1.492577 / 1.452155 (0.040422) | 1.572242 / 1.492716 (0.079525) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228054 / 0.018006 (0.210048) | 0.448317 / 0.000490 (0.447828) | 0.000368 / 0.000200 (0.000168) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.088604 / 0.014526 (0.074078) | 0.099317 / 0.176557 (-0.077239) | 0.152455 / 0.737135 (-0.584680) | 0.100444 / 0.296338 (-0.195894) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411876 / 0.215209 (0.196667) | 4.108187 / 2.077655 (2.030532) | 2.096371 / 1.504120 (0.592251) | 1.923532 / 1.541195 (0.382337) | 1.998345 / 1.468490 (0.529855) | 0.483853 / 4.584777 (-4.100924) | 3.622433 / 3.745712 (-0.123279) | 3.254430 / 5.269862 (-2.015431) | 2.044342 / 4.565676 (-2.521334) | 0.056756 / 0.424275 (-0.367519) | 0.007720 / 0.007607 (0.000113) | 0.487656 / 0.226044 (0.261612) | 4.882024 / 2.268929 (2.613096) | 2.585008 / 55.444624 (-52.859616) | 2.229251 / 6.876477 (-4.647225) | 2.408318 / 2.142072 (0.266246) | 0.617537 / 4.805227 (-4.187691) | 0.132102 / 6.500664 (-6.368562) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362077 / 1.841788 (-0.479711) | 19.750714 / 8.074308 (11.676406) | 14.545299 / 10.191392 (4.353907) | 0.168666 / 0.680424 (-0.511758) | 0.018606 / 0.534201 (-0.515595) | 0.394760 / 0.579283 (-0.184523) | 0.410030 / 0.434364 (-0.024334) | 0.464742 / 0.540337 (-0.075596) | 0.610881 / 1.386936 (-0.776055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005836 / 0.011353 (-0.005517) | 0.003493 / 0.011008 (-0.007515) | 0.079877 / 0.038508 (0.041369) | 0.057299 / 0.023109 (0.034190) | 0.332945 / 0.275898 (0.057047) | 0.386615 / 0.323480 (0.063135) | 0.004437 / 0.007986 (-0.003548) | 0.002758 / 0.004328 (-0.001571) | 0.062668 / 0.004250 (0.058418) | 0.046135 / 0.037052 (0.009083) | 0.346160 / 0.258489 (0.087671) | 0.416720 / 0.293841 (0.122879) | 0.026678 / 0.128546 (-0.101868) | 0.007893 / 0.075646 (-0.067753) | 0.260427 / 0.419271 (-0.158845) | 0.044240 / 0.043533 (0.000707) | 0.328101 / 0.255139 (0.072963) | 0.380072 / 0.283200 (0.096872) | 0.020813 / 0.141683 (-0.120870) | 1.400202 / 1.452155 (-0.051952) | 1.475627 / 1.492716 (-0.017089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174479 / 0.018006 (0.156473) | 0.413810 / 0.000490 (0.413320) | 0.003059 / 0.000200 (0.002860) | 0.000212 / 0.000054 (0.000157) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023422 / 0.037411 (-0.013990) | 0.071519 / 0.014526 (0.056993) | 0.080555 / 0.176557 (-0.096001) | 0.143825 / 0.737135 (-0.593311) | 0.081182 / 0.296338 (-0.215157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406858 / 0.215209 (0.191648) | 4.161475 / 2.077655 (2.083820) | 1.991800 / 1.504120 (0.487680) | 1.811224 / 1.541195 (0.270030) | 1.828809 / 1.468490 (0.360318) | 0.504882 / 4.584777 (-4.079895) | 2.985010 / 3.745712 (-0.760703) | 3.984856 / 5.269862 (-1.285006) | 2.477936 / 4.565676 (-2.087740) | 0.057553 / 0.424275 (-0.366722) | 0.006436 / 0.007607 (-0.001172) | 0.488061 / 0.226044 (0.262016) | 4.805501 / 2.268929 (2.536573) | 2.446508 / 55.444624 (-52.998116) | 2.051406 / 6.876477 (-4.825071) | 2.177696 / 2.142072 (0.035623) | 0.588021 / 4.805227 (-4.217207) | 0.125118 / 6.500664 (-6.375546) | 0.060885 / 0.075469 (-0.014584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197130 / 1.841788 (-0.644658) | 17.867450 / 8.074308 (9.793142) | 13.536895 / 10.191392 (3.345503) | 0.137603 / 0.680424 (-0.542821) | 0.016706 / 0.534201 (-0.517495) | 0.327642 / 0.579283 (-0.251641) | 0.347201 / 0.434364 (-0.087163) | 0.379570 / 0.540337 (-0.160768) | 0.517825 / 1.386936 (-0.869111) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005769 / 0.011353 (-0.005584) | 0.003414 / 0.011008 (-0.007594) | 0.063198 / 0.038508 (0.024690) | 0.056020 / 0.023109 (0.032911) | 0.393333 / 0.275898 (0.117435) | 0.421166 / 0.323480 (0.097686) | 0.004360 / 0.007986 (-0.003626) | 0.002860 / 0.004328 (-0.001469) | 0.062712 / 0.004250 (0.058461) | 0.045363 / 0.037052 (0.008311) | 0.413156 / 0.258489 (0.154667) | 0.422897 / 0.293841 (0.129056) | 0.027092 / 0.128546 (-0.101455) | 0.007960 / 0.075646 (-0.067687) | 0.068531 / 0.419271 (-0.350740) | 0.041402 / 0.043533 (-0.002131) | 0.377008 / 0.255139 (0.121869) | 0.409142 / 0.283200 (0.125942) | 0.019707 / 0.141683 (-0.121976) | 1.440556 / 1.452155 (-0.011599) | 1.487403 / 1.492716 (-0.005314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224355 / 0.018006 (0.206349) | 0.397855 / 0.000490 (0.397365) | 0.000363 / 0.000200 (0.000163) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025107 / 0.037411 (-0.012305) | 0.076404 / 0.014526 (0.061878) | 0.083194 / 0.176557 (-0.093362) | 0.135347 / 0.737135 (-0.601789) | 0.084786 / 0.296338 (-0.211553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433024 / 0.215209 (0.217815) | 4.323879 / 2.077655 (2.246224) | 2.263004 / 1.504120 (0.758884) | 2.072053 / 1.541195 (0.530858) | 2.113916 / 1.468490 (0.645426) | 0.502742 / 4.584777 
(-4.082035) | 3.001716 / 3.745712 (-0.743996) | 2.777960 / 5.269862 (-2.491901) | 1.826514 / 4.565676 (-2.739162) | 0.057735 / 0.424275 (-0.366540) | 0.006671 / 0.007607 (-0.000937) | 0.503347 / 0.226044 (0.277303) | 5.037308 / 2.268929 (2.768380) | 2.679146 / 55.444624 (-52.765478) | 2.410899 / 6.876477 (-4.465577) | 2.467341 / 2.142072 (0.325268) | 0.589824 / 4.805227 (-4.215403) | 0.125529 / 6.500664 (-6.375135) | 0.061950 / 0.075469 (-0.013520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304128 / 1.841788 (-0.537659) | 17.950215 / 8.074308 (9.875907) | 13.673768 / 10.191392 (3.482376) | 0.129863 / 0.680424 (-0.550561) | 0.016720 / 0.534201 (-0.517481) | 0.329795 / 0.579283 (-0.249488) | 0.339057 / 0.434364 (-0.095307) | 0.382279 / 0.540337 (-0.158059) | 0.507337 / 1.386936 (-0.879599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef05b6f99a2b19990c6f5e4e28d95d28781570db \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006199 / 0.011353 (-0.005154) | 0.003749 / 0.011008 (-0.007259) | 0.080600 / 0.038508 (0.042092) | 0.061017 / 0.023109 (0.037908) | 0.319966 / 0.275898 (0.044067) | 0.354937 / 0.323480 (0.031457) | 0.004854 / 0.007986 (-0.003131) | 0.002996 / 0.004328 (-0.001333) | 0.063100 / 0.004250 (0.058849) | 0.050063 / 0.037052 (0.013011) | 0.316744 / 0.258489 (0.058255) | 0.358001 / 0.293841 (0.064160) | 0.027503 / 0.128546 (-0.101043) | 0.007876 / 0.075646 (-0.067771) | 0.262211 / 0.419271 (-0.157060) | 0.045717 / 0.043533 (0.002184) | 0.317188 / 0.255139 (0.062049) | 0.342404 / 0.283200 (0.059205) | 0.020194 / 0.141683 (-0.121489) | 1.498672 / 1.452155 (0.046517) | 1.545479 / 1.492716 (0.052762) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210985 / 0.018006 (0.192979) | 0.433592 / 0.000490 (0.433102) | 0.002864 / 0.000200 (0.002664) | 0.000079 / 0.000054 
(0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023463 / 0.037411 (-0.013948) | 0.073375 / 0.014526 (0.058850) | 0.083082 / 0.176557 (-0.093475) | 0.142583 / 0.737135 (-0.594552) | 0.084267 / 0.296338 (-0.212071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412890 / 0.215209 (0.197681) | 4.131421 / 2.077655 (2.053766) | 1.969164 / 1.504120 (0.465044) | 1.772379 / 1.541195 (0.231185) | 1.834154 / 1.468490 (0.365664) | 0.496290 / 4.584777 (-4.088487) | 3.056504 / 3.745712 (-0.689208) | 3.400962 / 5.269862 (-1.868900) | 2.120575 / 4.565676 (-2.445101) | 0.056932 / 0.424275 (-0.367343) | 0.006412 / 0.007607 (-0.001195) | 0.484521 / 0.226044 (0.258477) | 4.817474 / 2.268929 (2.548545) | 2.464075 / 55.444624 (-52.980549) | 2.085056 / 6.876477 (-4.791421) | 2.324516 / 2.142072 (0.182444) | 0.592013 / 4.805227 (-4.213214) | 0.132232 / 6.500664 (-6.368432) | 0.062825 / 0.075469 (-0.012645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228080 / 1.841788 (-0.613708) | 18.555385 / 8.074308 (10.481077) | 13.939565 / 10.191392 (3.748173) | 0.145979 / 0.680424 (-0.534445) | 0.016823 / 0.534201 (-0.517377) | 0.330569 / 0.579283 (-0.248714) | 0.358094 / 0.434364 (-0.076270) | 0.384642 / 0.540337 (-0.155696) | 0.518347 / 1.386936 (-0.868589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006198 / 0.011353 (-0.005155) | 0.003670 / 0.011008 (-0.007338) | 0.062502 / 0.038508 (0.023994) | 0.064339 / 0.023109 (0.041229) | 0.428414 / 0.275898 (0.152516) | 0.463899 / 0.323480 (0.140420) | 0.005524 / 0.007986 (-0.002462) | 0.002915 / 0.004328 (-0.001413) | 0.062521 / 0.004250 (0.058270) | 0.051182 / 0.037052 (0.014130) | 0.431144 / 0.258489 (0.172655) | 0.469465 / 0.293841 (0.175624) | 0.027463 / 0.128546 (-0.101083) | 0.007974 / 0.075646 (-0.067673) | 0.068029 / 0.419271 (-0.351242) | 0.042123 / 0.043533 (-0.001409) | 0.428667 / 0.255139 (0.173528) | 0.455917 / 0.283200 (0.172717) | 0.023264 / 0.141683 (-0.118419) | 1.426986 / 1.452155 (-0.025168) | 1.500049 / 1.492716 (0.007332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207264 / 0.018006 (0.189258) | 0.440738 / 0.000490 (0.440248) | 0.000802 / 0.000200 (0.000602) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026245 / 0.037411 (-0.011166) | 0.078749 / 0.014526 (0.064223) | 0.087873 / 0.176557 (-0.088684) | 0.141518 / 0.737135 (-0.595617) | 0.089811 / 0.296338 (-0.206527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418955 / 0.215209 (0.203746) | 4.177881 / 2.077655 (2.100226) | 2.162678 / 1.504120 (0.658558) | 1.998969 / 1.541195 (0.457775) | 2.066720 / 1.468490 (0.598230) | 0.496850 / 4.584777 (-4.087927) | 3.041179 / 3.745712 (-0.704534) | 4.126039 / 5.269862 (-1.143823) | 2.740507 / 4.565676 (-1.825169) | 0.058025 / 0.424275 (-0.366250) | 0.006846 / 0.007607 (-0.000761) | 0.493281 / 0.226044 (0.267237) | 4.930196 / 2.268929 (2.661268) | 2.685152 / 55.444624 (-52.759472) | 2.378247 / 6.876477 (-4.498230) | 2.469103 / 2.142072 (0.327031) | 0.585346 / 4.805227 (-4.219882) | 0.126099 / 6.500664 (-6.374565) | 0.062946 / 0.075469 (-0.012523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313892 / 1.841788 (-0.527896) | 19.177117 / 8.074308 (11.102809) | 14.081321 / 10.191392 (3.889929) | 0.133948 / 0.680424 (-0.546476) | 0.017128 / 0.534201 (-0.517073) | 0.332241 / 0.579283 (-0.247042) | 0.373218 / 0.434364 (-0.061145) | 0.395308 / 0.540337 (-0.145030) | 0.529883 / 1.386936 (-0.857053) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#16f7c7677942083436062b904b74643accb9bcac \"CML watermark\")\n" ]
2023-07-31T06:05:36Z
2023-07-31T06:33:00Z
2023-07-31T06:18:17Z
MEMBER
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6101/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6101/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6101.diff", "html_url": "https://github.com/huggingface/datasets/pull/6101", "merged_at": "2023-07-31T06:18:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6101.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6101" }
https://api.github.com/repos/huggingface/datasets/issues/5147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5147/comments
https://api.github.com/repos/huggingface/datasets/issues/5147/events
https://github.com/huggingface/datasets/issues/5147
1,419,522,275
I_kwDODunzps5UnDDj
5,147
Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting
{ "avatar_url": "https://avatars.githubusercontent.com/u/8387736?v=4", "events_url": "https://api.github.com/users/falcaopetri/events{/privacy}", "followers_url": "https://api.github.com/users/falcaopetri/followers", "following_url": "https://api.github.com/users/falcaopetri/following{/other_user}", "gists_url": "https://api.github.com/users/falcaopetri/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/falcaopetri", "id": 8387736, "login": "falcaopetri", "node_id": "MDQ6VXNlcjgzODc3MzY=", "organizations_url": "https://api.github.com/users/falcaopetri/orgs", "received_events_url": "https://api.github.com/users/falcaopetri/received_events", "repos_url": "https://api.github.com/users/falcaopetri/repos", "site_admin": false, "starred_url": "https://api.github.com/users/falcaopetri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/falcaopetri/subscriptions", "type": "User", "url": "https://api.github.com/users/falcaopetri", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! In the `transformers` issue the object to not hash is a `Pool` - I think you can instantiate it inside your function instead of passing it as a parameter. It's good practice that your function and all its fn_kwargs are picklable, in case you want to parallelize `map` using `num_proc>1`\r\n\r\nFor the other case `def fn(example, verbose=False):` however, I agree it would be nice to let the user specify that \"verbose\" needs to be ignored.\r\n\r\nDo you think providing a decorator could help ? Maybe\r\n```python\r\n@datasets.hashing.register(ignore_kwargs=[\"verbose\"])\r\ndef func(example, verbose=False):\r\n ...\r\n```", "Hi @lhoestq! Thanks for your response.\r\n\r\nA `Pool` shouldn't be instantiated within the function, because there's a huge overhead in doing so. The main idea is that the same `Pool` should be used across all function calls. Parallel `map` is not helpful/desired in that specific scenario, because the heavy parallel computation is done by another lib (`pyctcdecode`, called within `transformer`'s model inference code).\r\n\r\nBut yes, it makes sense to be able to leverage parallel processing by just doing `num_proc>1` when possible.\r\n\r\nYour decorator suggestions seems like a pretty clean API to me. I didn't find a `datasets.hashing` module though. Would it be created for this specific purpose? Any downsides in just using `datasets.fingerprint`?\r\n\r\nAnd would `datasets.hashing.register` just add some metadata to `func` in your approach (so it could be inspected from `fingerprint_transform`)?\r\n\r\nAnd looking to the `datasets.Dataset` API, `.filter` would also benefited from this.", "> Would it be created for this specific purpose? Any downsides in just using datasets.fingerprint?\r\n\r\nThis can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\n> And would datasets.hashing.register just add some metadata to func in your approach (so it could be inspected from fingerprint_transform)?\r\n\r\nYup that's the idea :)\r\n\r\n> And looking to the datasets.Dataset API, .filter would also benefited from this.\r\n\r\nIndeed !\r\n\r\n-----\r\n\r\nIf you would like to contribute this you can assign yourself to this issue by posting #self-assign\r\nAnd of course if you have questions or if I can help, feel free to ping me !", "> This can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\nSure, it makes sense.\r\n\r\n---\r\n\r\nI don't plan to work on it right now, so I'll let it unassigned in case somebody wants to join. I'll get back at it as soon as possible though.\r\n" ]
2022-10-22T21:46:38Z
2022-11-01T22:19:07Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request `dataset.map` accepts a `fn_kwargs` that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint. I'd like to be able to tell `fingerprint_transform` which `fn_kwargs` should/shouldn't be taken into account during hashing. Of course, users should take care to use this new feature properly, just like the internal usages of `fingerprint_transform` [do](https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/src/datasets/arrow_dataset.py#L2700). ### Motivation This is originally motivated by https://github.com/huggingface/transformers/pull/18351#issuecomment-1263588680. Nonetheless, consider a more general processing function that accepts a kwarg that does not influence its output: ```python def fn(example, verbose=False): ... ``` Then `dataset.map(fn, fn_kwargs={"verbose": True})` would not benefit from dataset caching. I'm not sure if other methods in the `Dataset` API could benefit from this feature. ### Your contribution Based on `fingerprint_transform`'s `wrapper` function [here](https://github.com/huggingface/datasets/blob/c59cc34fcd2a369d27b77cc678017f5976a926a9/src/datasets/fingerprint.py#L443), it seems to me that it should be possible to make `.map`/`._map_single` accept something like `fn_use_fingerprint_kwargs`/`fn_ignore_fingerprint_kwargs` (probably another arg name). This would then be used by `fingerprint_transform.wrapper` to hash the transformation more flexibly. I could contribute a PR if this feature and approach look good to you.
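A minimal sketch of why the caching breaks, assuming `Hasher` from `datasets.fingerprint` as the hashing entry point (it is what the fingerprinting machinery uses internally): hashing the same function with different `fn_kwargs` yields different fingerprints, so the result cached for `verbose=False` is not reused for `verbose=True` even though the output is identical.

```python
# Sketch: differing fn_kwargs change the hash, hence the cache key, even when
# the kwarg (verbose) cannot change the output.
from datasets.fingerprint import Hasher

def fn(example, verbose=False):
    return example

h_quiet = Hasher.hash({"fn": fn, "fn_kwargs": {"verbose": False}})
h_loud = Hasher.hash({"fn": fn, "fn_kwargs": {"verbose": True}})
print(h_quiet == h_loud)  # False -> map() re-processes instead of reusing the cache
```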
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5147/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5147/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/6392
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6392/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6392/comments
https://api.github.com/repos/huggingface/datasets/issues/6392/events
https://github.com/huggingface/datasets/issues/6392
1,984,369,545
I_kwDODunzps52RxOJ
6,392
`push_to_hub` is not robust to the Hub closing the connection
{ "avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4", "events_url": "https://api.github.com/users/msis/events{/privacy}", "followers_url": "https://api.github.com/users/msis/followers", "following_url": "https://api.github.com/users/msis/following{/other_user}", "gists_url": "https://api.github.com/users/msis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/msis", "id": 577139, "login": "msis", "node_id": "MDQ6VXNlcjU3NzEzOQ==", "organizations_url": "https://api.github.com/users/msis/orgs", "received_events_url": "https://api.github.com/users/msis/received_events", "repos_url": "https://api.github.com/users/msis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/msis/subscriptions", "type": "User", "url": "https://api.github.com/users/msis", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi! We made some improvements to `push_to_hub` to make it more robust a couple of weeks ago but haven't published a release in the meantime, so it would help if you could install `datasets` from `main` (`pip install https://github.com/huggingface/datasets`) and let us know if this improved version of `push_to_hub` resolves the issue (in case the `ConnectionError` happens, re-running `push_to_hub` should be faster now).\r\n\r\nAlso, note that the previous implementation retries the upload, but sometimes this is not enough, so re-running the op is the only option.", "The update helped push more data.\r\nHowever it still crashed a little later:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/5f53cb57cf2a52ca0d4c2166a69a6714c64fcdbb7cb8936dfa5b11ac60058e5f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T011254Z&X-Amz-Expires=86400&X-Amz-Signature=74e3e33c09ac4e7c6ac887aaee8d489f068869abbe1ee6d58a910fb18d0601d4&X-Amz-SignedHeaders=host&partNumber=13&uploadId=kQwunNkunfmT9D8GulQu_ufw1BTZtRA6wEUI4hnYOjytfdf.GKxDETgMr4wm8_0WNF2yGaNco_0h3JAGm4l9KV1N0nqr5XXyUCbs1ROmHP475fn9FIhc1umWQLEDc97V&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 391, in _wrapped_lfs_upload\r\n lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 223, in lfs_upload\r\n _upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action[\"href\"])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 319, in _upload_multi_part\r\n else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 376, in _upload_parts_iteratively\r\n hf_raise_for_status(part_upload_res)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 330, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: 
https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/5f53cb57cf2a52ca0d4c2166a69a6714c64fcdbb7cb8936dfa5b11ac60058e5f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T011254Z&X-Amz-Expires=86400&X-Amz-Signature=74e3e33c09ac4e7c6ac887aaee8d489f068869abbe1ee6d58a910fb18d0601d4&X-Amz-SignedHeaders=host&partNumber=13&uploadId=kQwunNkunfmT9D8GulQu_ufw1BTZtRA6wEUI4hnYOjytfdf.GKxDETgMr4wm8_0WNF2yGaNco_0h3JAGm4l9KV1N0nqr5XXyUCbs1ROmHP475fn9FIhc1umWQLEDc97V&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"convert_to_hf.py\", line 121, in <module>\r\n main()\r\n File \"convert_to_hf.py\", line 109, in main\r\n audio_dataset.push_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1699, in push_to_hub\r\n split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 5215, in _push_parquet_shards_to_hub\r\n _retry(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 290, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3665, in preupload_lfs_files\r\n _upload_lfs_files(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 401, in _upload_lfs_files\r\n _wrapped_lfs_upload(filtered_actions[0])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 393, in _wrapped_lfs_upload\r\n raise RuntimeError(f\"Error while uploading '{operation.path_in_repo}' to the Hub.\") from exc\r\nRuntimeError: Error while uploading 'batch_20/train-00206-of-00261.parquet' to the Hub.\r\n```", "I think the previous implementation was actually better: it pushes to the hub every shard. So if it fails, as long as the shards have the same checksum, it will skip the ones that have been pushed.\r\n\r\nThe implementation in `main` pushes commits at the end, so when it fails, there are no commits and therefore restarts from the beginning every time.\r\n\r\nBelow is the another error log from another run with `main`. 
I've reverting back to the current release as it does the job for me.\r\n\r\n```\r\nUploading the dataset shards: 86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 224/261 [21:46<03:35, 5.83s/it]s]\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/97e68d7a5d4a747ffaa249fc09798e961d621fe4170599e6100197f7733f321d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T145155Z&X-Amz-Expires=86400&X-Amz-Signature=5341e4b34dc325737f92dc9005c4a31e4d3f9a3d3d853b267e01915260acf629&X-Amz-SignedHeaders=host&partNumber=27&uploadId=NRD0izEWv7MPtC2bYrm5VJ4XgIbHctKNguR7zS1UhGOOrXwBJvigrOywBvQBnS9sxiy0J0ma9sNog8S13nIdTdE9p60MIITTstUFeKvLHSxpU.a527QED1JVYzJ.9xA0&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 391, in _wrapped_lfs_upload\r\n lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 223, in lfs_upload\r\n _upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action[\"href\"])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 319, in _upload_multi_part\r\n else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 376, in _upload_parts_iteratively\r\n hf_raise_for_status(part_upload_res)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 330, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://hf-hub-lfs-us-east-1.s3.us-east-1.amazonaws.com/repos/6c/33/6c33b3be1463a656e43c7a4f2d43c4a1cdae6e9d81fff87f69167ef25ccb1b88/97e68d7a5d4a747ffaa249fc09798e961d621fe4170599e6100197f7733f321d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA2JU7TKAQFN2FTF47%2F20231110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20231110T145155Z&X-Amz-Expires=86400&X-Amz-Signature=5341e4b34dc325737f92dc9005c4a31e4d3f9a3d3d853b267e01915260acf629&X-Amz-SignedHeaders=host&partNumber=27&uploadId=NRD0izEWv7MPtC2bYrm5VJ4XgIbHctKNguR7zS1UhGOOrXwBJvigrOywBvQBnS9sxiy0J0ma9sNog8S13nIdTdE9p60MIITTstUFeKvLHSxpU.a527QED1JVYzJ.9xA0&x-id=UploadPart\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"convert_to_hf.py\", line 121, in <module>\r\n main()\r\n File 
\"convert_to_hf.py\", line 109, in main\r\n audio_dataset.push_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1699, in push_to_hub\r\n p, glob_pattern_to_regex(PUSH_TO_HUB_WITHOUT_METADATA_CONFIGS_SPLIT_PATTERN_SHARDED)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 5215, in _push_parquet_shards_to_hub\r\n token = token if token is not None else HfFolder.get_token()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 290, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3665, in preupload_lfs_files\r\n _upload_lfs_files(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 401, in _upload_lfs_files\r\n _wrapped_lfs_upload(filtered_actions[0])\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 393, in _wrapped_lfs_upload\r\n raise RuntimeError(f\"Error while uploading '{operation.path_in_repo}' to the Hub.\") from exc\r\nRuntimeError: Error while uploading 'batch_20/train-00224-of-00261.parquet' to the Hub.\r\n```", "There's a new error from the hub now:\r\n```\r\nPushing dataset shards to the dataset hub: 49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 128/261 [11:38<12:05, 5.45s/it]\r\nTraceback (most recent call last):\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/tarteel-ai/tawseem/commit/main\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"convert_to_hf.py\", line 121, in <module>\r\n main()\r\n File \"convert_to_hf.py\", line 109, in main\r\n audio_dataset.push_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1641, in push_to_hub\r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 5308, in _push_parquet_shards_to_hub\r\n _retry(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 293, in _retry\r\n raise err\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 290, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 1045, in _inner\r\n return 
fn(self, *args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3850, in upload_file\r\n commit_info = self.create_commit(\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 1045, in _inner\r\n return fn(self, *args, **kwargs)\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 3237, in create_commit\r\n hf_raise_for_status(commit_resp, endpoint_name=\"commit\")\r\n File \"/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 330, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\nhuggingface_hub.utils._errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/tarteel-ai/tawseem/commit/main (Request ID: Root=1-654e48e6-598511b14413bb293fa67084;783522b4-66f9-4f8a-8a74-2accf7cabd17)\r\n\r\nYou have exceeded our hourly quotas for action: commit. We invite you to retry later.\r\n```\r\n\r\nAt least this is more explicit from the server side.", "> think the previous implementation was actually better: it pushes to the hub every shard. So if it fails, as long as the shards have the same checksum, it will skip the ones that have been pushed.\r\n>\r\n>The implementation in main pushes commits at the end, so when it fails, there are no commits and therefore restarts from the beginning every time.\r\n>\r\n>Below is the another error log from another run with main. I've reverting back to the current release as it does the job for me.\r\n\r\nThe `preupload` step is instant for the already uploaded shards, so only the Parquet conversion is repeated without uploading the actual Parquet data (only to check the SHAs). The previous implementation manually checks the Parquet shard's fingerprint to resume uploading, so the current implementation is cleaner.\r\n\r\n> You have exceeded our hourly quotas for action: commit. We invite you to retry later.\r\n\r\nThis is the problem with the previous implementation. If the number of shards is large, it creates too many commits for the Hub in a short period.", "But I agree that the `500 Server Error` returned by the Hub is annoying. Earlier today, I also got it on a small 5GB dataset (with 500 MB shards).\r\n\r\n@Wauplin @julien-c Is there something we can do about this?", "@mariosasko can't do much if AWS raises a HTTP 500 unfortunately (we are simply pushing data to a S3 bucket).\r\nWhat we can do is to add a retry mechanism in the multi-part upload logic here: https://github.com/huggingface/huggingface_hub/blob/c972cba1fecb456a7b3325cdd1fdbcc425f21f94/src/huggingface_hub/lfs.py#L370 :confused: ", "@Wauplin That code already retries the request using `http_backoff`, no?", "> That code already retries the request using http_backoff, no?\r\n\r\nCurrently only on HTTP 503 by default. We should add 500 as well (and hope it is a transient error from AWS)", "Opened a PR to retry in case S3 raises HTTP 500. Will also retry on any `ConnectionError` (connection reset by peer, connection lost,...). Hopefully this should make the upload process more robust to transient errors.", "I still get the same error, using `push_to_hub`. 
Using `git lfs` and pushing the files solved it for me.", "@BEpresent the fix has not been released yet. You can expect a release of `huggingface_hub` (with this fix) today or tomorrow :)" ]
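As a sketch of the retry behavior the thread converges on, assuming `http_backoff` from `huggingface_hub.utils` and its `retry_on_status_codes`/`retry_on_exceptions` keyword arguments (check your installed version), a part upload that also retries transient S3 500s and dropped connections could look like this; it only illustrates the idea, not the library's actual patch, and the URL and payload are placeholders.

```python
# Illustrative sketch: retry a part upload on HTTP 500/503 and on dropped
# connections, with exponential backoff between attempts.
import requests
from huggingface_hub.utils import http_backoff

part_upload_url = "https://example-bucket.s3.amazonaws.com/part-13"  # placeholder
payload = b"..."  # one chunk of the Parquet shard (placeholder)

response = http_backoff(
    "PUT",
    part_upload_url,
    data=payload,
    max_retries=5,
    retry_on_status_codes=(500, 503),
    retry_on_exceptions=(requests.ConnectionError, requests.Timeout),
)
response.raise_for_status()
```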
2023-11-08T20:44:53Z
2023-12-20T07:28:24Z
2023-12-01T17:51:34Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Like to #6172, `push_to_hub` will crash if Hub resets the connection and raise the following error: ``` Pushing dataset shards to the dataset hub: 32%|β–ˆβ–ˆβ–ˆβ– | 54/171 [06:38<14:23, 7.38s/it] Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen httplib_response = self._make_request( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request httplib_response = conn.getresponse() File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 316, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 285, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 799, in urlopen retries = retries.increment( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment raise six.reraise(type(error), error, _stacktrace) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/packages/six.py", line 769, in reraise raise value.with_traceback(tb) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen httplib_response = self._make_request( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request httplib_response = conn.getresponse() File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 316, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 285, in _read_status raise RemoteDisconnected("Remote end closed connection without" urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 383, in _wrapped_lfs_upload lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 223, in lfs_upload _upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action["href"]) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 319, in 
_upload_multi_part else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 375, in _upload_parts_iteratively part_upload_res = http_backoff("PUT", part_upload_url, data=fileobj_slice) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 258, in http_backoff response = session.request(method=method, url=url, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 63, in send return super().send(request, *args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 501, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2bab8c06-b701-4266-aead-fe2e0dc0e3ed)') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "convert_to_hf.py", line 116, in <module> main() File "convert_to_hf.py", line 108, in main audio_dataset.push_to_hub( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1641, in push_to_hub repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 5308, in _push_parquet_shards_to_hub _retry( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 290, in _retry return func(*func_args, **func_kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file commit_info = self.create_commit( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 2695, in create_commit upload_lfs_files( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 393, in upload_lfs_files _wrapped_lfs_upload(filtered_actions[0]) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", 
line 385, in _wrapped_lfs_upload raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc RuntimeError: Error while uploading 'batch_19/train-00054-of-00171-932beb4082c034bf.parquet' to the Hub. ``` The function should retry if the operation fails, or at least offer a way to recover after such a failure. Right now, calling the function again will start re-sending all the parquet files, leading to duplicates in the repository, with no guarantee that they will actually be pushed. Previously, it would crash with an error 400 #4677 . ### Steps to reproduce the bug Any large dataset pushed to the Hub: ```py audio_dataset.push_to_hub( repo_id="org/dataset", ) ``` ### Expected behavior `push_to_hub` should have an option for max retries or resume. ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-5.15.0-1044-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - PyArrow version: 13.0.0 - Pandas version: 2.0.3
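Until the library offers built-in retries, one stopgap is to wrap the whole call in a retry loop. This is only a rough sketch of that idea (the helper name and backoff parameters are made up, and note the caveat above: re-calling `push_to_hub` re-uploads shards rather than resuming):

```python
# Hypothetical stopgap: retry the whole push with exponential backoff.
# Caveat: each attempt re-sends shards; this is not a resume mechanism.
import time

import requests

def push_with_retries(dataset_dict, repo_id, max_retries=5, base_delay=10.0):
    for attempt in range(max_retries):
        try:
            return dataset_dict.push_to_hub(repo_id)
        except (requests.exceptions.ConnectionError, RuntimeError) as err:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"Push failed ({err!r}); retrying in {delay:.0f}s")
            time.sleep(delay)

# push_with_retries(audio_dataset, "org/dataset")
```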
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6392/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6392/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5551/comments
https://api.github.com/repos/huggingface/datasets/issues/5551/events
https://github.com/huggingface/datasets/pull/5551
1,592,140,836
PR_kwDODunzps5KXCof
5,551
Suggest scikit-learn instead of sklearn
{ "avatar_url": "https://avatars.githubusercontent.com/u/74963545?v=4", "events_url": "https://api.github.com/users/osbm/events{/privacy}", "followers_url": "https://api.github.com/users/osbm/followers", "following_url": "https://api.github.com/users/osbm/following{/other_user}", "gists_url": "https://api.github.com/users/osbm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osbm", "id": 74963545, "login": "osbm", "node_id": "MDQ6VXNlcjc0OTYzNTQ1", "organizations_url": "https://api.github.com/users/osbm/orgs", "received_events_url": "https://api.github.com/users/osbm/received_events", "repos_url": "https://api.github.com/users/osbm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osbm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osbm/subscriptions", "type": "User", "url": "https://api.github.com/users/osbm", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "good catch!", "_The documentation is not available anymore as the PR was closed or merged._", "The test fail is unrelated to this PR and fixed on `main` - merging :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008942 / 0.011353 (-0.002411) | 0.004617 / 0.011008 (-0.006391) | 0.101310 / 0.038508 (0.062802) | 0.030997 / 0.023109 (0.007888) | 0.306292 / 0.275898 (0.030394) | 0.370533 / 0.323480 (0.047053) | 0.007318 / 0.007986 (-0.000667) | 0.003473 / 0.004328 (-0.000856) | 0.078557 / 0.004250 (0.074307) | 0.036312 / 0.037052 (-0.000740) | 0.308993 / 0.258489 (0.050504) | 0.344411 / 0.293841 (0.050570) | 0.034384 / 0.128546 (-0.094162) | 0.011631 / 0.075646 (-0.064016) | 0.323948 / 0.419271 (-0.095324) | 0.041176 / 0.043533 (-0.002357) | 0.302512 / 0.255139 (0.047373) | 0.322439 / 0.283200 (0.039239) | 0.088955 / 0.141683 (-0.052728) | 1.534918 / 1.452155 (0.082763) | 1.555803 / 1.492716 (0.063087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195639 / 0.018006 (0.177633) | 0.423068 / 0.000490 (0.422579) | 0.004101 / 0.000200 (0.003901) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023691 / 0.037411 (-0.013721) | 0.100536 / 0.014526 (0.086011) | 0.108399 / 0.176557 (-0.068157) | 0.143515 / 0.737135 (-0.593620) | 0.111886 / 0.296338 (-0.184452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| 
new / old (diff) | 0.417519 / 0.215209 (0.202310) | 4.180463 / 2.077655 (2.102808) | 1.862511 / 1.504120 (0.358391) | 1.658724 / 1.541195 (0.117529) | 1.735847 / 1.468490 (0.267357) | 0.688257 / 4.584777 (-3.896520) | 3.447976 / 3.745712 (-0.297737) | 1.877939 / 5.269862 (-3.391922) | 1.157385 / 4.565676 (-3.408292) | 0.081418 / 0.424275 (-0.342857) | 0.012395 / 0.007607 (0.004788) | 0.518935 / 0.226044 (0.292891) | 5.220355 / 2.268929 (2.951427) | 2.308355 / 55.444624 (-53.136269) | 1.960026 / 6.876477 (-4.916450) | 2.013179 / 2.142072 (-0.128893) | 0.802850 / 4.805227 (-4.002377) | 0.146941 / 6.500664 (-6.353723) | 0.064080 / 0.075469 (-0.011389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284443 / 1.841788 (-0.557344) | 13.903755 / 8.074308 (5.829447) | 14.467101 / 10.191392 (4.275709) | 0.156813 / 0.680424 (-0.523611) | 0.028583 / 0.534201 (-0.505618) | 0.406349 / 0.579283 (-0.172934) | 0.413178 / 0.434364 (-0.021186) | 0.491283 / 0.540337 (-0.049055) | 0.571171 / 1.386936 (-0.815765) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006868 / 0.011353 (-0.004484) | 0.004593 / 0.011008 (-0.006416) | 0.077574 / 0.038508 (0.039066) | 0.027703 / 0.023109 (0.004593) | 0.342096 / 0.275898 (0.066198) | 0.378500 / 0.323480 (0.055020) | 0.005785 / 0.007986 (-0.002201) | 0.003342 / 0.004328 (-0.000986) | 0.076105 / 0.004250 (0.071855) | 0.040369 / 0.037052 (0.003317) | 0.343611 / 0.258489 (0.085122) | 0.391859 / 0.293841 (0.098018) | 0.032675 / 0.128546 (-0.095871) | 0.011623 / 0.075646 (-0.064023) | 0.086623 / 0.419271 (-0.332648) | 0.051955 / 0.043533 (0.008423) | 0.343425 / 0.255139 (0.088286) | 0.368887 / 0.283200 (0.085688) | 0.097117 / 0.141683 (-0.044566) | 1.499546 / 1.452155 (0.047391) | 1.593100 / 1.492716 (0.100383) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193568 / 0.018006 (0.175562) | 0.409211 / 0.000490 (0.408722) | 0.003797 / 
0.000200 (0.003597) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024982 / 0.037411 (-0.012430) | 0.101367 / 0.014526 (0.086841) | 0.108546 / 0.176557 (-0.068010) | 0.144402 / 0.737135 (-0.592733) | 0.112233 / 0.296338 (-0.184105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432820 / 0.215209 (0.217611) | 4.341045 / 2.077655 (2.263391) | 2.058326 / 1.504120 (0.554207) | 1.853913 / 1.541195 (0.312718) | 1.942436 / 1.468490 (0.473946) | 0.699130 / 4.584777 (-3.885647) | 3.392879 / 3.745712 (-0.352833) | 1.908277 / 5.269862 (-3.361585) | 1.177998 / 4.565676 (-3.387678) | 0.082700 / 0.424275 (-0.341576) | 0.012505 / 0.007607 (0.004898) | 0.526286 / 0.226044 (0.300242) | 5.279599 / 2.268929 (3.010670) | 2.505771 / 55.444624 (-52.938854) | 2.158460 / 6.876477 (-4.718016) | 2.211437 / 2.142072 (0.069365) | 0.802065 / 4.805227 (-4.003163) | 0.150766 / 6.500664 (-6.349898) | 0.067639 / 0.075469 (-0.007830) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286595 / 1.841788 (-0.555192) | 13.961894 / 8.074308 (5.887586) | 14.021865 / 10.191392 (3.830473) | 0.164590 / 0.680424 (-0.515834) | 0.016909 / 0.534201 (-0.517292) | 0.392215 / 0.579283 (-0.187069) | 0.408080 / 0.434364 (-0.026284) | 0.488247 / 0.540337 (-0.052090) | 0.575524 / 1.386936 (-0.811412) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#699b0293876015457bfce40f7245d346c34c7717 \"CML watermark\")\n" ]
2023-02-20T16:16:57Z
2023-02-21T13:27:57Z
2023-02-21T13:21:07Z
CONTRIBUTOR
null
null
null
This is a kinda unimportant fix, but the suggested `pip install sklearn` does not work. The current error message if sklearn is not installed: ``` ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn. Please install it using 'pip install sklearn' for instance. ```
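For anyone hitting this: the import name is `sklearn` but the PyPI distribution is `scikit-learn`. A minimal sketch of the kind of import-name-to-pip-name mapping that would fix the hint (the dict and helper below are hypothetical, not the actual `datasets` implementation):

```python
# Hypothetical helper: map import names to the pip package that provides them.
PIP_NAME_OVERRIDES = {
    "sklearn": "scikit-learn",
    "PIL": "Pillow",
    "cv2": "opencv-python",
}

def pip_install_hint(import_name: str) -> str:
    pip_name = PIP_NAME_OVERRIDES.get(import_name, import_name)
    return f"Please install it using 'pip install {pip_name}' for instance."

print(pip_install_hint("sklearn"))  # -> ... 'pip install scikit-learn' ...
```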
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5551/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5551/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5551.diff", "html_url": "https://github.com/huggingface/datasets/pull/5551", "merged_at": "2023-02-21T13:21:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/5551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5551" }
https://api.github.com/repos/huggingface/datasets/issues/6184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6184/comments
https://api.github.com/repos/huggingface/datasets/issues/6184/events
https://github.com/huggingface/datasets/issues/6184
1,867,766,143
I_kwDODunzps5vU9l_
6,184
Map cache does not detect function changes in another module
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
[]
null
[ "This issue is a duplicate of https://github.com/huggingface/datasets/issues/3297. This is a limitation of `dill`, a package we use for caching (non-`__main__` module objects are serialized by reference). You can find more info about it here: https://github.com/uqfoundation/dill/issues/424.\r\n\r\nIn your case, moving \r\n```\r\ndata = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')\r\ndata = data.map(transform)\r\n``` \r\nto `test.py` and setting `transform.__module__ = None` at the end of `dataset.py` should fix the issue.", "I understand this may be a limitation of an upstream tool, but for a user for datasets this is very annoying, as when you have dozens of different datasets with different preprocessing functions you can't really move them all into the same file. It may be worth seeing if there is a way to specialize the dependency (eg. subclass it) and enforce behaviors that makes sense for your product.\r\n\r\nI was able to work around this for now by setting `__module__ = None`. If such workarounds are required for now it may be better to document it somewhere than a single obscure issue from a long time ago.\r\n\r\nAs this is a duplicate issue I'm closing it.\r\n\r\nI have another issue with the cache https://github.com/huggingface/datasets/issues/6179 can you take a look?" ]
2023-08-25T22:59:14Z
2023-08-29T20:57:07Z
2023-08-29T20:56:49Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
```python # dataset.py import os import datasets if not os.path.exists('/tmp/test.json'): with open('/tmp/test.json', 'w') as file: file.write('[{"text": "hello"}]') def transform(example): text = example['text'] # text += ' world' return {'text': text} data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train') data = data.map(transform) ``` ```python # test.py import dataset print(next(iter(dataset.data))) ``` Initialize cache ``` python3 test.py # {'text': 'hello'} ``` Edit dataset.py and uncomment the commented line, run again ``` python3 test.py # {'text': 'hello'} # expected: {'text': 'hello world'} ``` Clear cache and run again ``` rm -rf ~/.cache/huggingface/datasets/* python3 test.py # {'text': 'hello world'} ``` If instead the two files are combined, then changes to the function are detected correctly. But it's expected when working on any realistic codebase that things will be modularized into separate files.
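Applying the workaround from the first comment to this repro, `dataset.py` would keep only the function and clear its `__module__`, so that `dill` hashes the function by value and edits invalidate the cache (a sketch, assuming the behavior described in the linked `dill` issue):

```python
# dataset.py -- workaround sketch: keep only the transform here
def transform(example):
    text = example['text']
    text += ' world'
    return {'text': text}

# Make dill serialize the function by value instead of by reference,
# so changes to its body change the fingerprint used by the map cache.
transform.__module__ = None
```

```python
# test.py -- the loading/mapping moves here, per the suggestion above
import datasets
from dataset import transform

data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')
data = data.map(transform)
print(next(iter(data)))
```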
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6184/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/6231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6231/comments
https://api.github.com/repos/huggingface/datasets/issues/6231/events
https://github.com/huggingface/datasets/pull/6231
1,890,863,249
PR_kwDODunzps5aCr8_
6,231
Overwrite legacy default config name in `dataset_infos.json` in packaged datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6231). All of your documentation changes will be reflected on that endpoint.", "realized that this pr is still not merged, @lhoestq maybe you can take a look at it? ", "I think https://github.com/huggingface/datasets/pull/6218 fixed the issue (a bit differently though)", "ah actually nope, let me check", "@lhoestq yeah the pr you're referencing doesn't fix the problem when two semantically analogous configs occur in datasets_info.json, i suggest to rewrite the legacy one if it exists during .push_to_hub", "Only the old versions of `datasets` use the JSON file over the README and they can only load one config so the name doesn't really matter.\r\n\r\nThat's why I chose to load the info from the JSON no matter the name (no check to see if it's \"username--dataset_name\") in my previous PR.\r\n\r\nI think you can remove the old info without even checking the name. In this case maybe no need to update load.py ", "(also minor: not checking the name makes it more robust to dataset renaming)", "@lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with `dataset_infos.json` having two keys in it?", "> @lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with dataset_infos.json having two keys in it?\r\n\r\nIdeally they should have only one config no ? Since old versions of `datasets` simply load the first config in the JSON.\r\nWe can overwrite it with the new default one (and no matter the name of the outdated config in the JSON)\r\n\r\n" ]
2023-09-11T16:27:09Z
2023-09-26T11:19:36Z
null
CONTRIBUTOR
null
null
null
Currently, if we push data as the default config with `.push_to_hub` to a repo that has a legacy `dataset_infos.json` file containing a legacy default config name like `{username}--{dataset_name}`, a new key `"default"` is added to `dataset_infos.json` alongside the legacy one. I think the legacy one should be dropped in this case. Also, in `load.py` I suggest checking whether a legacy config name is indeed a legacy config name, because after this fix it might not be the case (this check was first introduced in https://github.com/huggingface/datasets/pull/6218)
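To make the intended end state concrete, here is a rough sketch of the overwrite described above (illustrative only; the real change lives inside `datasets`' push logic, and the helper below is hypothetical):

```python
# Hypothetical sketch: keep a single up-to-date "default" entry, dropping any
# legacy "{username}--{dataset_name}" keys. Old `datasets` versions simply load
# the first config in this JSON, so one entry is enough.
import json

def overwrite_legacy_default_config(path, new_default_info):
    with open(path) as f:
        infos = json.load(f)
    infos = {"default": new_default_info}  # drop legacy keys, whatever their names
    with open(path, "w") as f:
        json.dump(infos, f)
```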
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6231/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6231/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6231.diff", "html_url": "https://github.com/huggingface/datasets/pull/6231", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6231.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6231" }
https://api.github.com/repos/huggingface/datasets/issues/5605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5605/comments
https://api.github.com/repos/huggingface/datasets/issues/5605/events
https://github.com/huggingface/datasets/pull/5605
1,608,865,460
PR_kwDODunzps5LPPf5
5,605
Update README logo
{ "avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4", "events_url": "https://api.github.com/users/gary149/events{/privacy}", "followers_url": "https://api.github.com/users/gary149/followers", "following_url": "https://api.github.com/users/gary149/following{/other_user}", "gists_url": "https://api.github.com/users/gary149/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gary149", "id": 3841370, "login": "gary149", "node_id": "MDQ6VXNlcjM4NDEzNzA=", "organizations_url": "https://api.github.com/users/gary149/orgs", "received_events_url": "https://api.github.com/users/gary149/received_events", "repos_url": "https://api.github.com/users/gary149/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gary149/subscriptions", "type": "User", "url": "https://api.github.com/users/gary149", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Are you sure it's safe to remove? https://github.com/huggingface/datasets/pull/3866", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009520 / 0.011353 (-0.001833) | 0.005319 / 0.011008 (-0.005690) | 0.099372 / 0.038508 (0.060863) | 0.036173 / 0.023109 (0.013064) | 0.295752 / 0.275898 (0.019853) | 0.362882 / 0.323480 (0.039402) | 0.008442 / 0.007986 (0.000456) | 0.004225 / 0.004328 (-0.000103) | 0.076645 / 0.004250 (0.072394) | 0.044198 / 0.037052 (0.007146) | 0.311948 / 0.258489 (0.053459) | 0.342963 / 0.293841 (0.049122) | 0.038613 / 0.128546 (-0.089933) | 0.012127 / 0.075646 (-0.063519) | 0.334427 / 0.419271 (-0.084844) | 0.048309 / 0.043533 (0.004776) | 0.297046 / 0.255139 (0.041907) | 0.314562 / 0.283200 (0.031363) | 0.105797 / 0.141683 (-0.035886) | 1.460967 / 1.452155 (0.008812) | 1.500907 / 1.492716 (0.008190) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216185 / 0.018006 (0.198179) | 0.438924 / 0.000490 (0.438435) | 0.001210 / 0.000200 (0.001011) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026193 / 0.037411 (-0.011219) | 0.105888 / 0.014526 (0.091363) | 0.115812 / 0.176557 (-0.060744) | 0.158748 / 0.737135 (-0.578387) | 0.121514 / 0.296338 (-0.174824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.399837 / 0.215209 (0.184628) | 3.996992 / 2.077655 (1.919338) | 1.784964 / 1.504120 (0.280844) | 1.591078 / 1.541195 (0.049883) | 1.666424 / 1.468490 (0.197934) | 0.711450 / 4.584777 (-3.873327) | 3.787814 / 3.745712 (0.042102) | 2.056776 / 5.269862 (-3.213085) | 1.332163 / 4.565676 (-3.233514) | 0.085755 / 0.424275 (-0.338520) | 0.012033 / 0.007607 (0.004426) | 0.511500 / 0.226044 (0.285455) | 5.098999 / 2.268929 (2.830071) | 2.288261 / 55.444624 (-53.156364) | 1.947483 / 6.876477 (-4.928994) | 1.987838 / 2.142072 (-0.154234) | 0.852241 / 4.805227 (-3.952986) | 0.164781 / 6.500664 (-6.335883) | 0.061825 / 0.075469 (-0.013644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202253 / 1.841788 (-0.639534) | 14.632608 / 8.074308 (6.558300) | 13.331320 / 10.191392 (3.139928) | 0.157944 / 0.680424 (-0.522480) | 0.029284 / 0.534201 (-0.504917) | 0.446636 / 0.579283 (-0.132647) | 0.437009 / 0.434364 (0.002645) | 0.521883 / 0.540337 (-0.018455) | 0.606687 / 1.386936 (-0.780249) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007528 / 0.011353 (-0.003825) | 0.005274 / 0.011008 (-0.005734) | 0.073524 / 0.038508 (0.035016) | 0.033893 / 0.023109 (0.010784) | 0.335432 / 0.275898 (0.059534) | 0.379981 / 0.323480 (0.056501) | 0.005954 / 0.007986 (-0.002031) | 0.004126 / 0.004328 (-0.000203) | 0.072891 / 0.004250 (0.068641) | 0.046517 / 0.037052 (0.009465) | 0.337241 / 0.258489 (0.078752) | 0.385562 / 0.293841 (0.091721) | 0.036410 / 0.128546 (-0.092136) | 0.012246 / 0.075646 (-0.063401) | 0.085974 / 0.419271 (-0.333298) | 0.049665 / 0.043533 (0.006133) | 0.330919 / 0.255139 (0.075780) | 0.352041 / 0.283200 (0.068841) | 0.103751 / 0.141683 (-0.037931) | 1.468851 / 1.452155 (0.016696) | 1.565380 / 1.492716 (0.072663) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260431 / 0.018006 (0.242425) | 0.444554 / 0.000490 (0.444064) | 0.016055 / 0.000200 
(0.015855) | 0.000283 / 0.000054 (0.000228) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029130 / 0.037411 (-0.008281) | 0.112002 / 0.014526 (0.097476) | 0.120769 / 0.176557 (-0.055788) | 0.169345 / 0.737135 (-0.567790) | 0.129609 / 0.296338 (-0.166730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432211 / 0.215209 (0.217002) | 4.293008 / 2.077655 (2.215353) | 2.071291 / 1.504120 (0.567171) | 1.859322 / 1.541195 (0.318127) | 1.971434 / 1.468490 (0.502943) | 0.704042 / 4.584777 (-3.880735) | 3.791696 / 3.745712 (0.045983) | 3.142632 / 5.269862 (-2.127230) | 1.735151 / 4.565676 (-2.830525) | 0.086203 / 0.424275 (-0.338072) | 0.012542 / 0.007607 (0.004935) | 0.534870 / 0.226044 (0.308826) | 5.326042 / 2.268929 (3.057113) | 2.547960 / 55.444624 (-52.896664) | 2.212730 / 6.876477 (-4.663747) | 2.296177 / 2.142072 (0.154105) | 0.840311 / 4.805227 (-3.964917) | 0.168353 / 6.500664 (-6.332311) | 0.065949 / 0.075469 (-0.009520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255589 / 1.841788 (-0.586199) | 14.947344 / 8.074308 (6.873036) | 13.253721 / 10.191392 (3.062329) | 0.162349 / 0.680424 (-0.518075) | 0.017579 / 0.534201 (-0.516622) | 0.420758 / 0.579283 (-0.158525) | 0.430030 / 0.434364 (-0.004334) | 0.524669 / 0.540337 (-0.015669) | 0.623920 / 1.386936 (-0.763016) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35b789e8f6826b6b5a6b48fcc2416c890a1f326a \"CML watermark\")\n" ]
2023-03-03T15:46:31Z
2023-03-03T21:57:18Z
2023-03-03T21:50:17Z
CONTRIBUTOR
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4", "events_url": "https://api.github.com/users/gary149/events{/privacy}", "followers_url": "https://api.github.com/users/gary149/followers", "following_url": "https://api.github.com/users/gary149/following{/other_user}", "gists_url": "https://api.github.com/users/gary149/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gary149", "id": 3841370, "login": "gary149", "node_id": "MDQ6VXNlcjM4NDEzNzA=", "organizations_url": "https://api.github.com/users/gary149/orgs", "received_events_url": "https://api.github.com/users/gary149/received_events", "repos_url": "https://api.github.com/users/gary149/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gary149/subscriptions", "type": "User", "url": "https://api.github.com/users/gary149", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5605/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5605/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5605.diff", "html_url": "https://github.com/huggingface/datasets/pull/5605", "merged_at": "2023-03-03T21:50:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5605.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5605" }
https://api.github.com/repos/huggingface/datasets/issues/4702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4702/comments
https://api.github.com/repos/huggingface/datasets/issues/4702/events
https://github.com/huggingface/datasets/issues/4702
1,307,793,811
I_kwDODunzps5N81mT
4,702
Domain specific dataset discovery on the Hugging Face hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi! I added a link to this issue in our internal request for adding keywords/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex.", "> Hi! I added a link to this issue in our internal request for adding keywords/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex.\r\n\r\nThanks, for letting me know. Will you allow the topic tags to be user-generated or only chosen from a list?", "Thanks for opening this issue @davanstrien.\r\n\r\nAs we discussed last week, the tag approach would be in principle the simpler to be implemented, either the domain tag (with closed vocabulary: more reliable but also more rigid), or the topic tag (with open vocabulary: more flexible for user needs)", "Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n\r\n(where i suggested using `tags: - foo - bar` IIRC.\r\n\r\nThanks a ton!", "> Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n> \r\n> (where i suggested using `tags: - foo - bar` IIRC.\r\n> \r\n> Thanks a ton!\r\n\r\nThis doesn't ring a bell - I did a quick search of https://discuss.huggingface.co but didn't find anything. \r\n\r\nThe `tags: ` approach sounds like a good option for this. It would be especially nice if these could suggest existing tags, but this probably won't be easily possible through the current interface. \r\n", "I opened a PR to add \"tags\" to the YAML validator:\r\nhttps://github.com/huggingface/datasets/pull/4716\r\n\r\nI also added \"tags\" to the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging), with suggestions like \"bio\" or \"newspapers\"", "Thanks @lhoestq for the initiative.\r\n \r\nJust one question: are \"tags\" already supported on the Hub? \r\n\r\nI think they aren't. Thus, the Hub should support them so that they are properly displayed.", "I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)", "> I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)\r\n\r\nI think this would already be a helpful start. I'm happy to try this out with the datasets added to https://huggingface.co/organizations/biglam and use the `huggingface_hub` to filter those datasets using the tags. ", "Is this abandoned? \r\nI'm looking for a transport logistics dataset; how can I find one?", "@younes-io Full text search is probably your best bet: https://huggingface.co/search/full-text?type=dataset" ]
2022-07-18T11:14:03Z
2024-02-12T09:53:43Z
null
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
**Is your feature request related to a problem? Please describe.** ## The problem The datasets hub currently has `8,239` datasets. These datasets span a wide range of different modalities and tasks (currently with a bias towards textual data). There are various ways of identifying datasets that may be relevant for a particular use case: - searching - various filters Currently, however, there isn't an easy way to identify datasets belonging to a specific domain. For example, I want to browse machine learning datasets related to 'social science' or 'climate change research'. The ability to identify datasets relating to a specific domain has come up in discussions around the [BigLA](https://github.com/bigscience-workshop/lam/) datasets hackathon https://github.com/bigscience-workshop/lam/discussions/31#discussioncomment-3123610. As part of the hackathon, we're currently collecting datasets related to Libraries, Archives and Museums and making them available via the hub. We currently do this under a Hugging Face organization (https://huggingface.co/biglam). However, going forward, I can see some of these datasets being migrated to sit under an organization that is the custodian of the dataset (for example, a national library the data was originally from). At this point, it becomes more difficult to quickly identify datasets from this domain without relying on search. This is also related to some existing issues on GitHub related to metadata on the hub: - https://github.com/huggingface/datasets/issues/3625 - https://github.com/huggingface/datasets/issues/3877 **Describe the solution you'd like** ### Some possible solutions that may help with this: #### Enable domain tags (from a controlled vocabulary) - This would add a metadata field to the YAML for the domain a dataset relates to - Advantages: - the list is controlled, allowing it to be more easily integrated into the datasets tag app (https://huggingface.co/spaces/huggingface/datasets-tagging) - the controlled vocabulary could align with an existing controlled vocabulary - this additional metadata can be used to perform filtering by domain - disadvantages - choosing the best controlled vocab may be difficult - there are many datasets that are likely to fit into the 'machine learning' domain (i.e. there is a long tail of datasets that aren't in the more 'generic' machine learning domain) #### Enable topic tags (user-generated) Enable 'free form' topic tags for datasets and models. This would be closer to GitHub's repository topics which can be chosen from a controlled list (https://github.com/topics/) but can also be more user/org specific. This could potentially be useful for organizations to also manage their own models and datasets as the number they hold in their org grows. For example, they may create 'topic tags' for a specific project, so it's clearer which datasets/models are related to that project. #### Collections This solution would likely be the biggest shift and may require significant changes in the hub frontend. Collections could work in several different ways but would include: Users can curate particular datasets, models, spaces, etc., into a collection. For example, they may create a collection of 'historic newspapers suitable for training language models'. These collections would not be mutually exclusive, i.e. a dataset can belong to zero, one or many collections. Collections can also potentially be nested under other collections.
This is fairly common on other data repositories; for example, the following collections: <img width="293" alt="Screenshot 2022-07-18 at 11 50 44" src="https://user-images.githubusercontent.com/8995957/179496445-963ed122-5e26-4574-96e8-41081bce3e2b.png"> all belong under a higher-level collection (https://bl.iro.bl.uk/collections/353c908d-b495-4413-b047-87236d2573e3?locale=en). There are different models one could use for how these collections could be created: - only within an org - for any dataset/model - the owner of a dataset/model has to agree to be added to a collection - a collection owner can have people suggest additions to their collection - other models.... These collections could be thematic, related to particular training approaches, curate models with particular inference properties etc. Whilst some of these features may duplicate current or future tag filters on the hub, they offer the advantage of being flexible and not having to predict what users will want to do upfront. There is also potential for automating the creation of these collections based on existing metadata. For example, one could collect models trained on a collection of datasets so for example, if we had a collection of 'historic newspapers suitable for training language models' that contained 30 datasets, we could create another collection 'historic newspaper language models' that takes any model on the hub whose metadata says it used one or more of those 30 datasets. There is also the option of exploring ML approaches to suggest models/datasets may be relevant to a particular collection. This approach is likely to be quite difficult to implement well and would require significant thought. There is also likely to be a benefit in doing quite a bit of upfront work in curating useful collections to demonstrate the benefits of collections. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. It is possible to collate this information externally, i.e. one could link back to the relevant models/datasets from an external platform. **Additional context** Add any other context about the feature request here. I'm cc'ing others involved in the BigLAM hackathon who may also have thoughts @cakiki @clancyoftheoverflow @albertvillanova
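As a concrete illustration of the discovery flow the comments above sketch out (filtering by tag with `huggingface_hub`), something like the following should work once domain/topic tags exist; the tag value `lam` is a made-up example, and the exact `filter` semantics depend on the `huggingface_hub` version:

```python
# Hypothetical discovery flow once tags are in place; "lam" is a made-up tag.
from huggingface_hub import HfApi

api = HfApi()
for ds in api.list_datasets(filter="lam", limit=10):
    print(ds.id, ds.tags)
```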
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4702/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4702/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7041/comments
https://api.github.com/repos/huggingface/datasets/issues/7041/events
https://github.com/huggingface/datasets/issues/7041
2,404,576,038
I_kwDODunzps6PUusm
7,041
`sort` after `filter` unreasonably slow
{ "avatar_url": "https://avatars.githubusercontent.com/u/56711045?v=4", "events_url": "https://api.github.com/users/Tobin-rgb/events{/privacy}", "followers_url": "https://api.github.com/users/Tobin-rgb/followers", "following_url": "https://api.github.com/users/Tobin-rgb/following{/other_user}", "gists_url": "https://api.github.com/users/Tobin-rgb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tobin-rgb", "id": 56711045, "login": "Tobin-rgb", "node_id": "MDQ6VXNlcjU2NzExMDQ1", "organizations_url": "https://api.github.com/users/Tobin-rgb/orgs", "received_events_url": "https://api.github.com/users/Tobin-rgb/received_events", "repos_url": "https://api.github.com/users/Tobin-rgb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tobin-rgb/subscriptions", "type": "User", "url": "https://api.github.com/users/Tobin-rgb", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "`filter` add an indices mapping on top of the dataset, so `sort` has to gather all the rows that are kept to form a new Arrow table and sort the table. Gathering all the rows can take some time, but is a necessary step. You can try calling `ds = ds.flatten_indices()` before sorting to remove the indices mapping.", "> `filter` add an indices mapping on top of the dataset, so `sort` has to gather all the rows that are kept to form a new Arrow table and sort the table. Gathering all the rows can take some time, but is a necessary step. You can try calling `ds = ds.flatten_indices()` before sorting to remove the indices mapping.\n\nThis worked, thank you so much." ]
2024-07-12T03:29:27Z
2025-04-03T07:58:55Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug as the title says ... ### Steps to reproduce the bug `sort` on its own behaves normally. ```python from datasets import Dataset import random nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)] ds = Dataset.from_list(nums) print("start sort") ds = ds.sort("k") print("finish sort") ``` but `sort` after `filter` is extremely slow. ```python from datasets import Dataset import random nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)] ds = Dataset.from_list(nums) ds = ds.filter(lambda x: x > 100, input_columns="k") print("start sort") ds = ds.sort("k") print("finish sort") ``` ### Expected behavior Is this a bug, or is it a misuse of the `sort` function? ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
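As the comment above explains, `filter` only adds an indices mapping, so `sort` must first gather the kept rows; calling `flatten_indices()` materializes the filtered rows into a new Arrow table and restores the fast path. A sketch of that workaround applied to the repro:

```python
from datasets import Dataset
import random

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")

# Materialize the indices mapping left by `filter` into a new Arrow table,
# so `sort` no longer has to gather rows through the mapping.
ds = ds.flatten_indices()
ds = ds.sort("k")
```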
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7041/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7041/timeline
null
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/5265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5265/comments
https://api.github.com/repos/huggingface/datasets/issues/5265/events
https://github.com/huggingface/datasets/issues/5265
1,455,274,864
I_kwDODunzps5Wvbtw
5,265
Get an IterableDataset from a map-style Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
[ "I think `stream` could be misleading since the data is not being streamed from remote endpoints (one could think that's the case when they see `load_dataset` followed by `stream`). Hence, I prefer the second option.\r\n\r\nPS: When we resolve https://github.com/huggingface/datasets/issues/4542, we could add `as_tf_dataset` to the API for consistency and deprecate `to_tf_dataset`." ]
2022-11-18T14:54:40Z
2023-02-01T16:36:03Z
2023-02-01T16:36:03Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
This is useful to leverage iterable-dataset-specific features like: - fast approximate shuffling - lazy map, filter, etc. Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset. Here are some ideas regarding the API: ```python # 1. # - consistency with load_dataset(..., streaming=True) # - gives the intuition that map/filter/etc. are done on-the-fly ids = ds.stream() # 2. # - more explicit about the output type # - but maybe sounds like a conversion tool rather than a step in a processing pipeline ids = ds.as_iterable_dataset() ```
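This proposal eventually shipped as `Dataset.to_iterable_dataset()`; a minimal sketch of the resulting workflow (assuming a recent `datasets` release that includes this method and its `num_shards` parameter):

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": list(range(1_000))})

# Convert the map-style dataset into an IterableDataset; map/filter now run lazily.
# `num_shards` controls how the data is sharded for shuffling/parallel iteration.
ids = ds.to_iterable_dataset(num_shards=4)

ids = ids.shuffle(seed=42, buffer_size=100)        # fast approximate shuffling
ids = ids.map(lambda x: {"doubled": x["id"] * 2})  # applied on the fly while iterating

for example in ids:
    pass  # examples are produced lazily, shard by shard
```

The name that won out is the conversion-style option (2), with a `to_` prefix for consistency with existing methods like `to_tf_dataset`.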
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5265/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5265/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5774
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5774/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5774/comments
https://api.github.com/repos/huggingface/datasets/issues/5774/events
https://github.com/huggingface/datasets/pull/5774
1,676,716,662
PR_kwDODunzps5OxIMe
5,774
Fix style
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010336 / 0.011353 (-0.001017) | 0.007085 / 0.011008 (-0.003924) | 0.135577 / 0.038508 (0.097069) | 0.038301 / 0.023109 (0.015192) | 0.427919 / 0.275898 (0.152021) | 0.461451 / 0.323480 (0.137971) | 0.008929 / 0.007986 (0.000944) | 0.005260 / 0.004328 (0.000931) | 0.103481 / 0.004250 (0.099231) | 0.054885 / 0.037052 (0.017833) | 0.434956 / 0.258489 (0.176467) | 0.466915 / 0.293841 (0.173074) | 0.052403 / 0.128546 (-0.076144) | 0.021128 / 0.075646 (-0.054518) | 0.466847 / 0.419271 (0.047576) | 0.085096 / 0.043533 (0.041563) | 0.439935 / 0.255139 (0.184796) | 0.453613 / 0.283200 (0.170413) | 0.123913 / 0.141683 (-0.017769) | 1.930114 / 1.452155 (0.477959) | 2.052083 / 1.492716 (0.559366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280612 / 0.018006 (0.262606) | 0.583937 / 0.000490 (0.583447) | 0.004542 / 0.000200 (0.004342) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035901 / 0.037411 (-0.001510) | 0.160357 / 0.014526 (0.145831) | 0.141661 / 0.176557 (-0.034896) | 0.234915 / 0.737135 (-0.502220) | 0.164110 / 0.296338 (-0.132228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659901 / 0.215209 (0.444692) | 6.529102 / 2.077655 (4.451447) | 2.635324 / 
1.504120 (1.131204) | 2.275777 / 1.541195 (0.734583) | 2.343205 / 1.468490 (0.874715) | 1.241310 / 4.584777 (-3.343467) | 5.683784 / 3.745712 (1.938072) | 3.377162 / 5.269862 (-1.892700) | 2.176404 / 4.565676 (-2.389273) | 0.144303 / 0.424275 (-0.279972) | 0.016352 / 0.007607 (0.008745) | 0.817383 / 0.226044 (0.591339) | 8.148356 / 2.268929 (5.879428) | 3.489277 / 55.444624 (-51.955347) | 2.848086 / 6.876477 (-4.028391) | 2.973304 / 2.142072 (0.831232) | 1.517821 / 4.805227 (-3.287407) | 0.278794 / 6.500664 (-6.221870) | 0.096385 / 0.075469 (0.020916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631693 / 1.841788 (-0.210095) | 19.564716 / 8.074308 (11.490408) | 23.583081 / 10.191392 (13.391689) | 0.252363 / 0.680424 (-0.428061) | 0.027644 / 0.534201 (-0.506557) | 0.579634 / 0.579283 (0.000351) | 0.645702 / 0.434364 (0.211338) | 0.667302 / 0.540337 (0.126965) | 0.766425 / 1.386936 (-0.620511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011186 / 0.011353 (-0.000167) | 0.007327 / 0.011008 (-0.003681) | 0.105441 / 0.038508 (0.066933) | 0.040293 / 0.023109 (0.017184) | 0.480557 / 0.275898 (0.204659) | 0.522049 / 0.323480 (0.198569) | 0.007779 / 0.007986 (-0.000207) | 0.007338 / 0.004328 (0.003009) | 0.104744 / 0.004250 (0.100494) | 0.059463 / 0.037052 (0.022411) | 0.494055 / 0.258489 (0.235566) | 0.534340 / 0.293841 (0.240499) | 0.062800 / 0.128546 (-0.065746) | 0.020687 / 0.075646 (-0.054959) | 0.135833 / 0.419271 (-0.283439) | 0.087472 / 0.043533 (0.043939) | 0.465019 / 0.255139 (0.209880) | 0.526713 / 0.283200 (0.243513) | 0.131424 / 0.141683 (-0.010259) | 1.884759 / 1.452155 (0.432605) | 2.015817 / 1.492716 (0.523101) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237032 / 0.018006 (0.219026) | 0.605209 / 0.000490 (0.604719) | 0.006653 / 0.000200 (0.006453) | 0.000264 / 0.000054 (0.000210) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034982 / 0.037411 (-0.002430) | 0.141409 / 0.014526 (0.126883) | 0.151635 / 0.176557 (-0.024922) | 0.217298 / 0.737135 (-0.519837) | 0.171945 / 0.296338 (-0.124393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678596 / 0.215209 (0.463387) | 6.802432 / 2.077655 (4.724777) | 3.021617 / 1.504120 (1.517497) | 2.722508 / 1.541195 (1.181313) | 2.728194 / 1.468490 (1.259704) | 1.245863 / 4.584777 (-3.338914) | 5.762676 / 3.745712 (2.016963) | 5.497855 / 5.269862 (0.227994) | 2.855764 / 4.565676 (-1.709912) | 0.157359 / 0.424275 (-0.266916) | 0.015562 / 0.007607 (0.007955) | 0.865559 / 0.226044 (0.639515) | 8.553052 / 2.268929 (6.284123) | 3.905544 / 55.444624 (-51.539081) | 3.272528 / 6.876477 (-3.603949) | 3.399481 / 2.142072 (1.257408) | 1.540155 / 4.805227 (-3.265072) | 0.275871 / 6.500664 (-6.224793) | 0.092346 / 0.075469 (0.016877) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.753646 / 1.841788 (-0.088142) | 20.074050 / 8.074308 (11.999742) | 23.920391 / 10.191392 (13.728999) | 0.257161 / 0.680424 (-0.423263) | 0.027805 / 0.534201 (-0.506396) | 0.565605 / 0.579283 (-0.013678) | 0.643277 / 0.434364 (0.208914) | 0.633504 / 0.540337 (0.093167) | 0.754317 / 1.386936 (-0.632619) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d34c7968ea1a3fe7d4fa7cdf23673e0354f69ac \"CML watermark\")\n" ]
2023-04-20T13:21:32Z
2023-04-20T13:34:26Z
2023-04-20T13:24:28Z
MEMBER
null
null
null
Fix C419 issues
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5774/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5774/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5774.diff", "html_url": "https://github.com/huggingface/datasets/pull/5774", "merged_at": "2023-04-20T13:24:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5774.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5774" }
https://api.github.com/repos/huggingface/datasets/issues/4947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4947/comments
https://api.github.com/repos/huggingface/datasets/issues/4947/events
https://github.com/huggingface/datasets/pull/4947
1,364,967,957
PR_kwDODunzps4-hvbq
4,947
Try to fix the Windows CI after TF update 2.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4947). All of your documentation changes will be reflected on that endpoint." ]
2022-09-07T17:14:49Z
2023-09-24T10:05:38Z
2022-09-08T09:13:10Z
MEMBER
null
null
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4947/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4947/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/4947.diff", "html_url": "https://github.com/huggingface/datasets/pull/4947", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4947.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4947" }
https://api.github.com/repos/huggingface/datasets/issues/6034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6034/comments
https://api.github.com/repos/huggingface/datasets/issues/6034/events
https://github.com/huggingface/datasets/issues/6034
1,804,501,361
I_kwDODunzps5rjoFx
6,034
load_dataset hangs on WSL
{ "avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4", "events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}", "followers_url": "https://api.github.com/users/Andy-Zhou2/followers", "following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}", "gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Andy-Zhou2", "id": 20140522, "login": "Andy-Zhou2", "node_id": "MDQ6VXNlcjIwMTQwNTIy", "organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs", "received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events", "repos_url": "https://api.github.com/users/Andy-Zhou2/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions", "type": "User", "url": "https://api.github.com/users/Andy-Zhou2", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Even if a dataset is cached, we still make requests to check whether the cache is up-to-date. [This](https://huggingface.co/docs/datasets/v2.13.1/en/loading#offline) section in the docs explains how to avoid them and directly load the cached version.", "Thanks - that works! However it doesn't resolve the original issue (but I am not sure if it is a WSL problem)", "We use `requests` to make HTTP requests (and `aiohttp` in the streaming mode), so I don't think we can provide much help regarding the socket issue (it probably has something to do with WSL). " ]
2023-07-14T09:03:10Z
2023-07-14T14:48:29Z
2023-07-14T14:48:29Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug load_dataset simply hangs. It happens roughly once every 5 calls and, interestingly, hangs for a multiple of 5 minutes (5/10/15 minutes). Profiling in PyCharm shows the time is spent in <method 'connect' of '_socket.socket' objects>. However, a local cache is available, so I am not sure why a socket is needed at all. ([profiler result](https://ibb.co/0Btbbp8)) This only happens on WSL for me; on native Windows and on my MacBook the cache is recognized quickly and the dataset loads within a second. ### Steps to reproduce the bug I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64) Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux >>> import datasets >>> datasets.load_dataset('ai2_arc', 'ARC-Challenge') # hangs for 5/10/15 minutes ### Expected behavior The cache is recognized quickly and the dataset loads within a second. ### Environment info Please let me know if I should provide more environment information.
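As the comments note, `datasets` still issues HTTP requests to check cache freshness even when the dataset is fully cached; its documented offline mode skips those checks entirely. A minimal sketch of the workaround, assuming the cache is already populated (the variable must be set before `datasets` is imported):

```python
import os

# Documented offline switch: skip the "is the cache up to date?" network checks
# and load straight from the local cache. Set it before importing `datasets`.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

ds = datasets.load_dataset("ai2_arc", "ARC-Challenge")  # served from cache, no sockets
```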
{ "avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4", "events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}", "followers_url": "https://api.github.com/users/Andy-Zhou2/followers", "following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}", "gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Andy-Zhou2", "id": 20140522, "login": "Andy-Zhou2", "node_id": "MDQ6VXNlcjIwMTQwNTIy", "organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs", "received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events", "repos_url": "https://api.github.com/users/Andy-Zhou2/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions", "type": "User", "url": "https://api.github.com/users/Andy-Zhou2", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6034/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6034/timeline
null
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/5063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5063/comments
https://api.github.com/repos/huggingface/datasets/issues/5063/events
https://github.com/huggingface/datasets/pull/5063
1,395,895,463
PR_kwDODunzps5AHasG
5,063
Align signature of list_repo_files with latest hfh
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-10-04T08:51:46Z
2022-10-07T16:42:57Z
2022-10-07T16:40:16Z
MEMBER
null
null
null
This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`. This is already the case for `dataset_info`. CC: @lhoestq
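A hedged illustration of the call being aligned here: `list_repo_files` does exist on `huggingface_hub.HfApi`, but the auth keyword has changed across versions (`use_auth_token` at the time of this PR, `token` again in current releases), so check your installed version before copying this.

```python
from huggingface_hub import HfApi

api = HfApi()
# List the files of a dataset repo on the Hub. `token=True` means "use the locally
# saved credentials"; at the time of this PR the keyword was `use_auth_token`.
files = api.list_repo_files("squad", repo_type="dataset", token=True)
print(files)
```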
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5063/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5063/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5063.diff", "html_url": "https://github.com/huggingface/datasets/pull/5063", "merged_at": "2022-10-07T16:40:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5063.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5063" }
https://api.github.com/repos/huggingface/datasets/issues/6235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6235/comments
https://api.github.com/repos/huggingface/datasets/issues/6235/events
https://github.com/huggingface/datasets/issues/6235
1,893,337,083
I_kwDODunzps5w2gf7
6,235
Support multiprocessing for download/extract nestedly
{ "avatar_url": "https://avatars.githubusercontent.com/u/22725729?v=4", "events_url": "https://api.github.com/users/hgt312/events{/privacy}", "followers_url": "https://api.github.com/users/hgt312/followers", "following_url": "https://api.github.com/users/hgt312/following{/other_user}", "gists_url": "https://api.github.com/users/hgt312/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hgt312", "id": 22725729, "login": "hgt312", "node_id": "MDQ6VXNlcjIyNzI1NzI5", "organizations_url": "https://api.github.com/users/hgt312/orgs", "received_events_url": "https://api.github.com/users/hgt312/received_events", "repos_url": "https://api.github.com/users/hgt312/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hgt312/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hgt312/subscriptions", "type": "User", "url": "https://api.github.com/users/hgt312", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2023-09-12T21:51:08Z
2023-09-12T21:51:08Z
null
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request Currently, multiprocessing for download/extract is not applied nestedly. For example, when processing SlimPajama, there are only 3 processes (one each for train/test/val), even though there are many files inside these 3 folders ``` Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data files #1: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data files #2: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #1: 0%| | 0/1 [00:00<?, ?obj/s] Extracting data files #2: 0%| | 0/1 [00:00<?, ?obj/s] ``` ### Motivation Speed up dataset loading. ### Your contribution I can help test the feature
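For reference, `load_dataset` already exposes a `num_proc` argument in recent `datasets` releases that parallelizes download and preparation, but only across top-level data files; the request above is essentially for that parallelism to recurse into nested folders. A hedged sketch of what is available today (the SlimPajama repo id is assumed here):

```python
from datasets import load_dataset

# `num_proc` spreads download/extract over several worker processes, but only at
# the granularity of the top-level files/splits -- hence the 3 processes observed above.
ds = load_dataset("cerebras/SlimPajama-627B", num_proc=8)
```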
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6235/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6235/timeline
null
null
null
null