| Field | Type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | dict |
| comments | list |
| created_at | timestamp[ns, tz=UTC] |
| updated_at | timestamp[ns, tz=UTC] |
| closed_at | timestamp[ns, tz=UTC] |
| author_association | string |
| type | float64 |
| active_lock_reason | float64 |
| sub_issues_summary | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | float64 |
| state_reason | string |
| draft | float64 |
| pull_request | dict |

The records below list these fields in this order, with `|` separating consecutive fields.
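For orientation, a minimal sketch of loading and inspecting records with this schema (the repository name is hypothetical; any GitHub-issues dump with the columns above would do):

```python
from datasets import load_dataset

# Hypothetical repository name; substitute the actual dump of these issue records.
issues = load_dataset("my-org/github-issues-dump", split="train")

# Keep a handful of the 33 columns listed above for a quick look.
subset = issues.select_columns(["html_url", "title", "state", "created_at", "author_association"])
print(subset[0])
```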
https://api.github.com/repos/huggingface/datasets/issues/7211
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7211/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7211/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7211/events
|
https://github.com/huggingface/datasets/issues/7211
| 2,576,400,502
|
I_kwDODunzps6ZkMB2
| 7,211
|
Describe only selected fields in README
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/67658835?v=4",
"events_url": "https://api.github.com/users/alozowski/events{/privacy}",
"followers_url": "https://api.github.com/users/alozowski/followers",
"following_url": "https://api.github.com/users/alozowski/following{/other_user}",
"gists_url": "https://api.github.com/users/alozowski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alozowski",
"id": 67658835,
"login": "alozowski",
"node_id": "MDQ6VXNlcjY3NjU4ODM1",
"organizations_url": "https://api.github.com/users/alozowski/orgs",
"received_events_url": "https://api.github.com/users/alozowski/received_events",
"repos_url": "https://api.github.com/users/alozowski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alozowski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alozowski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alozowski",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-10-09T16:25:47Z
| 2024-10-09T16:25:47Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some fields, in order not to overcomplicate the Dataset Preview and to filter out some of them.
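A rough sketch of the kind of partial schema this asks for, expressed with `datasets.Features` (the field names below are invented for illustration; the actual leaderboard files have many more nested fields):

```python
from datasets import Features, Value

# Purely illustrative: describe only a couple of fields and leave the rest
# of the nested JSON out of the README-level schema.
partial_features = Features(
    {
        "model_name": Value("string"),
        "results": {"average_score": Value("float64")},
    }
)
print(partial_features)
```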
### Motivation
The `Results` dataset for the Open LLM Leaderboard contains json files with a complex nested structure. I would like to add `README.md` there to use the SQL console, for example. But if I describe the structure of this dataset completely, it will overcomplicate the use of Dataset Preview and the total number of columns will exceed 50
### Your contribution
I'm afraid I'm not familiar with the project structure, so I won't be able to open a PR, but I'll try to help with something else if possible
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7211/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7211/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7374
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7374/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7374/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7374/events
|
https://github.com/huggingface/datasets/pull/7374
| 2,793,442,320
|
PR_kwDODunzps6IC66n
| 7,374
|
Remove .h5 from imagefolder extensions
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-01-16T18:17:24Z
| 2025-01-16T18:26:40Z
| 2025-01-16T18:26:38Z
|
MEMBER
| null | null | null |
The `.h5` format is not relevant for imagefolder, and it makes the viewer fail to process datasets on HF (so many of them that the viewer takes more time to process new datasets).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7374/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7374/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7374.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7374",
"merged_at": "2025-01-16T18:26:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7374.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7374"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5221
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5221/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5221/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5221/events
|
https://github.com/huggingface/datasets/issues/5221
| 1,442,309,094
|
I_kwDODunzps5V9-Pm
| 5,221
|
Cannot push
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bayartsogt-ya",
"id": 43239645,
"login": "bayartsogt-ya",
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bayartsogt-ya",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Did you run `huggingface-cli lfs-enable-largefiles` before committing or before adding ? Maybe you can try before adding\r\n\r\nAnyway I'd encourage you to split your data into several TAR archives if possible, this way the dataset can loaded faster using multiprocessing (by giving each process a subset of shards to process)",
"@lhoestq \r\nThanks for the help!\r\n> Maybe you can try before adding\r\n\r\nIt did not help\r\n\r\nBut I totally got your point about split into multiple TAR archives. It really helped!"
] | 2022-11-09T15:32:05Z
| 2022-11-10T18:11:21Z
| 2022-11-10T18:11:11Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am facing an issue when I try to push a tar.gz file of around 11 GB to the Hub.
```
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ du -sh *
4.0K README.md
13G data
516K test.jsonl
18M train.jsonl
4.0K ulaanbal_v0.py
11G ulaanbal_v0.tar.gz
452K validation.jsonl
(venv) ╭─laptop@laptop~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git add ulaanbal_v0.tar.gz && git commit -m 'large version'
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git push
EOFoading LFS objects: 0% (0/1), 0 B | 0 B/s
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
error: failed to push some refs to 'https://huggingface.co/datasets/bayartsogt/ulaanbal_v0'
```
I have already tried pushing a small version of this and it was working fine, so my guess is that it is probably because of the big file.
I ran the following before the commit:
```
╰─$ git lfs install
╰─$ huggingface-cli lfs-enable-largefiles .
```
### Steps to reproduce the bug
Create a private dataset on Hugging Face and push a 12 GB tar.gz file.
### Expected behavior
To be pushed with no issue
### Environment info
- `datasets` version: 2.6.1
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 10.0.0
- Pandas version: 1.3.5
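A rough sketch of the sharding approach suggested in the comments above (splitting the `data` directory into several TAR archives so loading can later be parallelized; the shard count and archive names are made up):

```python
import tarfile
from pathlib import Path

num_shards = 8  # made-up shard count
files = sorted(p for p in Path("data").rglob("*") if p.is_file())

# Round-robin the files into num_shards archives so each shard gets a similar share.
for shard_idx in range(num_shards):
    with tarfile.open(f"ulaanbal_v0-{shard_idx:05d}.tar", "w") as tar:
        for f in files[shard_idx::num_shards]:
            tar.add(f, arcname=str(f.relative_to("data")))
```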
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bayartsogt-ya",
"id": 43239645,
"login": "bayartsogt-ya",
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bayartsogt-ya",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5221/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5221/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6279
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6279/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6279/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6279/events
|
https://github.com/huggingface/datasets/issues/6279
| 1,928,028,226
|
I_kwDODunzps5y62BC
| 6,279
|
Batched IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7010688?v=4",
"events_url": "https://api.github.com/users/lneukom/events{/privacy}",
"followers_url": "https://api.github.com/users/lneukom/followers",
"following_url": "https://api.github.com/users/lneukom/following{/other_user}",
"gists_url": "https://api.github.com/users/lneukom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lneukom",
"id": 7010688,
"login": "lneukom",
"node_id": "MDQ6VXNlcjcwMTA2ODg=",
"organizations_url": "https://api.github.com/users/lneukom/orgs",
"received_events_url": "https://api.github.com/users/lneukom/received_events",
"repos_url": "https://api.github.com/users/lneukom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lneukom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lneukom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lneukom",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"This is exactly what I was looking for. It would also be very useful for me :-)",
"This issue is really smashing the selling point of HF datasets... The only workaround I've found so far is to create a customized IterableDataloader which improves the loading speed to some extent.\r\n\r\nFor example I've a HF dataset `dt_train` with `len(dt_train) == 1M`. Using plain DataLoader is extremely slow:\r\n```\r\n%%time\r\ndl_train = DataLoader(dt_train, batch_size=128, shuffle = True)\r\nfor batch in dl_train:\r\n pass\r\n``` \r\n\r\n```\r\nCPU times: user 24min 35s, sys: 704 ms, total: 24min 36s\r\nWall time: 24min 37s\r\n```\r\nAnd DataLoader works even worse with HF's iterable_dataset:\r\n```\r\n%%time\r\ndt_train_ = dt_train.with_format(None).to_iterable_dataset(num_shards=64).shuffle(buffer_size=10_000)\r\ndl_train = DataLoader(dt_train_, batch_size=128)\r\nfor batch in dl_train:\r\n pass\r\n```\r\n```\r\nCPU times: user 1h 6min 2s, sys: 4.28 s, total: 1h 6min 6s\r\nWall time: 1h 7min 53s\r\n```\r\nWorkaround by running a customized wrapper:\r\n```\r\n%%time\r\nfrom torch.utils.data import DataLoader, IterableDataset\r\n\r\nclass Dataset2Iterable(IterableDataset):\r\n \"\"\"\r\n Wrapper to use a HF dataset as pytorch IterableDataset to speed up data loading.\r\n \"\"\"\r\n def __init__(self, dataset, batch_size=1, shuffle=True):\r\n super(Dataset2Iterable).__init__()\r\n self.dataset = dataset\r\n self.batch_size = batch_size\r\n self.shuffle = shuffle\r\n\r\n def __iter__(self):\r\n if self.shuffle: self.dataset.shuffle()\r\n return self.dataset.iter(batch_size=self.batch_size)\r\n\r\ndl_train = DataLoader(Dataset2Iterable(dt_train, batch_size = 128), batch_size=1, num_workers=0)\r\nfor n in range(2):\r\n for batch in dl_train:\r\n pass\r\n```\r\nThe speed still is slower than using tensorflow's loader but improved a lot than previous code:\r\n```\r\nCPU times: user 4min 18s, sys: 0 ns, total: 4min 18s\r\nWall time: 4min 20s\r\n```\r\nNote that the way I implemented `Dataset2Iterable` will only work with `num_workers=0`.",
"I can confirm that @zhh210's solution works with `num_workers=0`. However, for my use case, this was still slower than tokenizing on the fly through a collator and leveraging multiple workers in the dataloder.\r\n\r\n@lhoestq I think this is an important use case (e.g., streaming from a large dataset, online or stored on disk). What do you think might be the best solution to move forward?",
"I guess it can be implemented using a batched`.map()` under the hood that returns a single item containing the input batch.\r\n\r\nIn the meantime you can use this:\r\n\r\n```python\r\ndef batch(unbatched: dict[str, list]) -> dict[str, list]:\r\n return {k: [v] for k, v in unbatched}\r\n\r\nbatched_dataset = dataset.map(batch, batched=True, batch_size=batch_size)\r\n```\r\n\r\nThough it would be great to have a `.batch()` method indeed, I'd be happy to help with anyone wants to open a PR",
"If no one else is planning to work on this, I can take it on. I'll wait until next week, and if no one has started a PR by then, I'll go ahead and open one.",
"It looks like the implementation of IterableDataset is still using a hardcoded batch size of 1. For example in line 2063 in [`/datsets/src/datasets/iterable_dataset.py`](https://github.com/huggingface/datasets/blob/3.0.2/src/datasets/iterable_dataset.py#L2063). Iterating over IterableDataset with large batch sizes therefore remains slow, even when using `batch()`. I guess then the data are not being read from one contiguous chunk of memory. Instead every example is retrieved one by one, leading to long dataloading times. As a minimal example: Load c4 dataset and iterate over it with a large batch size.\r\n```python\r\nimport datasets\r\nfrom timeit import default_timer as timer\r\nc4 = datasets.load_dataset(\"allenai/c4\", \"en\", streaming=True, split=\"train\")\r\nc4_batched = c4.batch(512**2) # use large batch size\r\niterator = iter(c4_batched)\r\nfor i in range(5):\r\n start_time=timer()\r\n next(iterator) # get next batch\r\n end_time = timer()\r\n print(f\"time for one batch: {end_time-start_time}\")\r\n```\r\nThis results in the following output for me:\r\ntime for one batch: 12.615376660600305\r\ntime for one batch: 13.011422813870013\r\ntime for one batch: 14.157325950451195\r\ntime for one batch: 14.225894245319068\r\ntime for one batch: 13.898222777992487\r\n\r\nBecause I want to use my IterableDataset with the pytorch dataloader I rewrote the `__iter_pytorch__` and the `__iter__` functions like so and am getting much faster dataloading times. I marked the lines I changed with \"# changed here\":\r\n\r\n```python\r\nfrom datasets.iterable_dataset import _convert_to_arrow\r\nfrom datasets.formatting import TensorFormatter, get_formatter\r\nfrom datasets.features.features import cast_to_python_objects\r\nimport sys\r\nimport fsspec.asyn\r\nfrom itertools import islice\r\nfrom datasets.utils.logging import get_logger\r\nfrom datasets.iterable_dataset import _examples_to_batch, _apply_feature_types_on_batch, _apply_feature_types_on_example\r\n\r\nlogger = get_logger(__name__)\r\n\r\ndef __iter__(self):\r\n if \"torch\" in sys.modules:\r\n import torch.utils.data\r\n\r\n worker_info = torch.utils.data.get_worker_info()\r\n if isinstance(self, torch.utils.data.IterableDataset) and worker_info is not None:\r\n # We're a torch.utils.data.IterableDataset in a PyTorch worker process\r\n yield from self._iter_pytorch()\r\n return\r\n\r\n ex_iterable = self._prepare_ex_iterable_for_iteration(batch_size=self.batch_size, drop_last_batch=self.drop_last_batch) # changed here\r\n if self._formatting:\r\n formatter = get_formatter(self._formatting.format_type, features=self.features)\r\n format_dict = (\r\n formatter.recursive_tensorize if isinstance(formatter, TensorFormatter) else cast_to_python_objects\r\n )\r\n else:\r\n format_dict = None\r\n\r\n if self._formatting and (ex_iterable.iter_arrow or self._formatting.format_type == \"arrow\"):\r\n if ex_iterable.iter_arrow:\r\n iterator = ex_iterable.iter_arrow()\r\n else:\r\n iterator = _convert_to_arrow(ex_iterable, batch_size=self.batch_size) # changed here\r\n for key, pa_table in iterator:\r\n yield formatter.format_row(pa_table)\r\n return\r\n\r\n for key, example in ex_iterable:\r\n if self.features and not ex_iterable.is_typed:\r\n # `IterableDataset` automatically fills missing columns with None.\r\n # This is done with `_apply_feature_types_on_example`.\r\n example = _apply_feature_types_on_example(\r\n example, self.features, token_per_repo_id=self._token_per_repo_id\r\n )\r\n yield format_dict(example) if format_dict else 
example\r\n\r\n\r\n\r\ndef _iter_pytorch(self):\r\n ex_iterable = self._prepare_ex_iterable_for_iteration(batch_size=self.batch_size, drop_last_batch=self.drop_last_batch) # changed here\r\n # Fix for fsspec when using multiprocess to avoid hanging in the ML training loop. (only required for fsspec >= 0.9.0)\r\n # See https://github.com/fsspec/gcsfs/issues/379\r\n fsspec.asyn.reset_lock()\r\n # check if there aren't too many workers\r\n import torch.utils.data\r\n\r\n worker_info = torch.utils.data.get_worker_info()\r\n if self._is_main_process() and ex_iterable.n_shards < worker_info.num_workers:\r\n logger.warning(\r\n f\"Too many dataloader workers: {worker_info.num_workers} (max is dataset.n_shards={ex_iterable.n_shards}). \"\r\n f\"Stopping {worker_info.num_workers - ex_iterable.n_shards} dataloader workers.\"\r\n )\r\n logger.info(\r\n f\"To parallelize data loading, we give each process some shards (or data sources) to process. \"\r\n f\"Therefore it's unnecessary to have a number of workers greater than dataset.n_shards={ex_iterable.n_shards}. \"\r\n f\"To enable more parallelism, please split the dataset in more files than {ex_iterable.n_shards}.\"\r\n )\r\n # split workload\r\n _log_prefix = f\"node#{self._distributed.rank} \" if self._distributed else \"\"\r\n shards_indices = ex_iterable.split_shard_indices_by_worker(worker_info.id, worker_info.num_workers)\r\n if shards_indices:\r\n logger.debug(\r\n f\"{_log_prefix}dataloader worker#{worker_info.id}, ': Starting to iterate over {len(shards_indices)}/{ex_iterable.n_shards} shards.\"\r\n )\r\n ex_iterable = ex_iterable.shard_data_sources(worker_id=worker_info.id, num_workers=worker_info.num_workers)\r\n self._state_dict = ex_iterable._init_state_dict()\r\n if self._starting_state_dict:\r\n ex_iterable.load_state_dict(self._starting_state_dict)\r\n\r\n if self._formatting:\r\n formatter = get_formatter(self._formatting.format_type, features=self.features)\r\n format_dict = (\r\n formatter.recursive_tensorize if isinstance(formatter, TensorFormatter) else cast_to_python_objects\r\n )\r\n else:\r\n format_dict = None\r\n\r\n if self._formatting and (ex_iterable.iter_arrow or self._formatting == \"arrow\"):\r\n if ex_iterable.iter_arrow:\r\n iterator = ex_iterable.iter_arrow()\r\n else:\r\n iterator = _convert_to_arrow(ex_iterable, batch_size=self.batch_size) # changed here\r\n if self.batch_size > 1: # changed here until end of file\r\n for key, pa_table in iterator:\r\n yield formatter.format_batch(pa_table)\r\n return\r\n else:\r\n for key, pa_table in iterator:\r\n yield formatter.format_row(pa_table)\r\n return\r\n\r\n iterator = iter(ex_iterable)\r\n if self.batch_size > 1:\r\n for key, example in iterator:\r\n # If batched, first build the batch\r\n examples = [example] + [example for key, example in islice(iterator, self.batch_size - 1)]\r\n if self.drop_last_batch and len(examples) < self.batch_size: # ignore last batch\r\n return\r\n batch = _examples_to_batch(examples)\r\n if self.features and not ex_iterable.is_typed:\r\n # `IterableDataset` automatically fills missing columns with None.\r\n # This is done with `_apply_feature_types_on_batch`.\r\n batch = _apply_feature_types_on_batch(batch, self.features, token_per_repo_id=self._token_per_repo_id)\r\n yield format_dict(batch) if format_dict else batch\r\n else:\r\n for key, example in ex_iterable:\r\n if self.features and not ex_iterable.is_typed:\r\n # `IterableDataset` automatically fills missing columns with None.\r\n # This is done with 
`_apply_feature_types_on_example`.\r\n example = _apply_feature_types_on_example(\r\n example, self.features, token_per_repo_id=self._token_per_repo_id\r\n )\r\n yield format_dict(example) if format_dict else example\r\n logger.debug(\r\n f\"{_log_prefix}dataloader worker#{worker_info.id}, ': Finished iterating over {len(shards_indices)}/{ex_iterable.n_shards} shards.\"\r\n )\r\n else:\r\n logger.debug(\r\n f\"{_log_prefix}dataloader worker#{worker_info.id}, ': Stopping... Number of dataset shards < num_workers ({ex_iterable.n_shards}<{worker_info.num_workers}).\"\r\n )\r\n```\r\n\r\nFor anyone wanting to try it you can patch it into datasets by overwriting the function via `setattr(datasets.IterableDataset, '_iter_pytorch', _iter_pytorch)`\r\n\r\nI don't really know what most of the rest of the code is doing so no idea if this is a valid fix or not, but it seems to work for me.\r\nExample of running the fix:\r\n```python\r\nfrom torch.utils.data.dataloader import DataLoader\r\nc4.batch_size = 512**2 # set batch size here\r\ndataloader = Dataloader(c4, batch_size=None) # use custom batching from IterableDataset\r\niterator = iter(dataloader)\r\nfor i in range(5):\r\n start_time=timer()\r\n next(iterator) #get the batch\r\n end_time = timer()\r\n print(f\"time for one batch: {end_time-start_time}\")\r\n```\r\nI now get \r\ntime for one batch: 0.6047679269686341\r\ntime for one batch: 0.000248616561293602\r\ntime for one batch: 0.00017435848712921143\r\ntime for one batch: 0.00015910807996988297\r\ntime for one batch: 0.00015317369252443314\r\n\r\nI love the datasets library and it would be great if iterating with large batch sizes would be supported directly, either with a similar fix to mine or in some other way :)",
"Hi @taczin , thanks for reporting !\r\n\r\nIndeed the `IterableDataset.batch()` implementation is quite naive is manipulates python objects:\r\n\r\nhttps://github.com/huggingface/datasets/blob/d37ed46ebf45981131bd3678173dbb4b7e2b2f1a/src/datasets/iterable_dataset.py#L3026-L3029\r\n\r\nHowever it can be much faster if it can be applied on the Arrow data, maybe using something like this (untested)\r\n\r\n```python\r\ndef batch_fn(unbatched): \r\n return {k: [v] for k, v in unbatched.items()} \r\n\r\ndef batch_fn_arrow(unbatched_pa_table): \r\n offsets = pa.array([0, len(unbatched_pa_table)])\r\n return pa.Table.from_arrays([\r\n pa.ListArray.from_arrays(offsets, unbatched_pa_table[k])\r\n for k in unbatched_pa_table.column_names\r\n ], unbatched_pa_table.column_names)\r\n\r\nif self._ex_iterabe.iter_arrow:\r\n return self.with_format(\"arrow\").map(\r\n batch_fn_arrow, batched=True, batch_size=batch_size, drop_last_batch=drop_last_batch\r\n ).with_format(self._formatting.format_type if self._formatting else None)\r\nelse:\r\n return self.map(batch_fn, batched=True, batch_size=batch_size, drop_last_batch=drop_last_batch)\r\n```",
"Hi @lhoestq , thanks for your answer. I was wondering: is there a reason why the internal call to `ex_iterable = self._prepare_ex_iterable_for_iteration()` in the IterableDataset code does not pass the batch size even though it could? If not passed the default of 1 is used, leading to the observed slow loading.",
"After calling `.batch()`, `_prepare_ex_iterable_for_iteration` should use batch_size=1 since now each row in the dataset is actually a batch of the original dataset."
] | 2023-10-05T11:12:49Z
| 2024-11-07T10:01:22Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Hi,
could you add an implementation of a batched `IterableDataset`? It already supports an option to do batch iteration via `.iter(batch_size=...)`, but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator.
### Motivation
The current implementation loads each element of a batch individually which can be very slow in cases of a big batch_size. I did some experiments [here](https://discuss.huggingface.co/t/slow-dataloader-with-big-batch-size/57224) and using a batched iteration would speed up data loading significantly.
### Your contribution
N/A
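For reference, a rough sketch of the batched-streaming pattern discussed in the comments above (it assumes a `datasets` release that ships `IterableDataset.batch()` and follows the `DataLoader(..., batch_size=None)` trick from the comments):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# Stream the dataset and let `datasets` group rows into batches of 128 examples.
ds = load_dataset("allenai/c4", "en", streaming=True, split="train")
batched = ds.batch(batch_size=128)

# batch_size=None disables the DataLoader's own batching, so each yielded item
# is already a dict of columns holding 128 values.
loader = DataLoader(batched, batch_size=None)
for batch in loader:
    print(len(batch["text"]))
    break
```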
| null |
{
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6279/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6279/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5825
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5825/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5825/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5825/events
|
https://github.com/huggingface/datasets/issues/5825
| 1,697,327,483
|
I_kwDODunzps5lKyl7
| 5,825
|
FileNotFound even though exists
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Muennighoff",
"id": 62820084,
"login": "Muennighoff",
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Muennighoff",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! \r\n\r\nThis would only work if `bigscience/xP3` was a no-code dataset, but it isn't (it has a Python builder script).\r\n\r\nBut this should work: \r\n```python\r\nload_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl\")\r\n```\r\n\r\n",
"I see, it's not compatible w/ regex right?\r\ne.g.\r\n`load_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/*\")`",
"> I see, it's not compatible w/ regex right? e.g. `load_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/*\")`\r\n\r\nIt should work for patterns that \"reference\" the local filesystem, but to make this work with the Hub, we must implement https://github.com/huggingface/datasets/issues/5281 first.\r\n\r\nIn the meantime, you can fetch these glob files with `HfFileSystem` and pass them as a list to `load_dataset`:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom huggingface_hub import HfFileSystem, hf_hub_url # `HfFileSystem` requires the latest version of `huggingface_hub`\r\n\r\nfs = HfFileSystem()\r\nglob_files = fs.glob(\"datasets/bigscience/xP3/ur/*\")\r\n# convert fsspec URLs to HTTP URLs\r\nresolved_paths = [fs.resolve_path(file) for file in glob_files]\r\ndata_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths]\r\n\r\nds = load_dataset(\"json\", data_files=data_files)\r\n```",
"This works using `load_dataset(\"json\", data_files=\"hf://datasets/bigscience/xP3/ur/*\")` now, closing"
] | 2023-05-05T09:49:55Z
| 2023-08-16T10:02:01Z
| 2023-08-16T10:02:01Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm trying to download https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl, which works fine in my web browser, but somehow not with datasets. Am I doing something wrong?
```
Downloading builder script: 100%
2.82k/2.82k [00:00<00:00, 64.2kB/s]
Downloading readme: 100%
12.6k/12.6k [00:00<00:00, 585kB/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-2-4b45446a91d5>](https://localhost:8080/#) in <cell line: 4>()
2 lang = "ur"
3 fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
----> 4 dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")
6 frames
[/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions)
291 if allowed_extensions is not None:
292 error_msg += f" with any supported extension {list(allowed_extensions)}"
--> 293 raise FileNotFoundError(error_msg)
294 return sorted(out)
295
FileNotFoundError: Unable to find 'https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl' at /content/https:/huggingface.co/datasets/bigscience/xP3/resolve/main
```
### Steps to reproduce the bug
```
!pip install -q datasets
from datasets import load_dataset
lang = "ur"
fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")
```
### Expected behavior
Correctly downloads
### Environment info
latest versions
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5825/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5825/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4929/events
|
https://github.com/huggingface/datasets/pull/4929
| 1,361,508,366
|
PR_kwDODunzps4-WK2w
| 4,929
|
Fixes a typo in loading documentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7144772?v=4",
"events_url": "https://api.github.com/users/sighingnow/events{/privacy}",
"followers_url": "https://api.github.com/users/sighingnow/followers",
"following_url": "https://api.github.com/users/sighingnow/following{/other_user}",
"gists_url": "https://api.github.com/users/sighingnow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sighingnow",
"id": 7144772,
"login": "sighingnow",
"node_id": "MDQ6VXNlcjcxNDQ3NzI=",
"organizations_url": "https://api.github.com/users/sighingnow/orgs",
"received_events_url": "https://api.github.com/users/sighingnow/received_events",
"repos_url": "https://api.github.com/users/sighingnow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sighingnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sighingnow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sighingnow",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-05T07:18:54Z
| 2022-09-06T02:11:03Z
| 2022-09-05T13:06:38Z
|
CONTRIBUTOR
| null | null | null |
As shown in the [documentation page](https://huggingface.co/docs/datasets/loading), the `"tr"in` should be `"train"`.

|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4929/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4929/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4929",
"merged_at": "2022-09-05T13:06:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4929"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4668
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4668/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4668/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4668/events
|
https://github.com/huggingface/datasets/issues/4668
| 1,299,735,893
|
I_kwDODunzps5NeGVV
| 4,668
|
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
] | null |
[
"It seems like a private dataset. The viewer is currently not supported on the private datasets."
] | 2022-07-09T18:04:13Z
| 2022-07-11T07:47:47Z
| 2022-07-11T07:47:47Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/hungnm/multilingual-amazon-review-sentiment
### Description
_No response_
### Owner
Yes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4668/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4668/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5298
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5298/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5298/events
|
https://github.com/huggingface/datasets/issues/5298
| 1,464,681,871
|
I_kwDODunzps5XTUWP
| 5,298
|
Bug in xopen with Windows pathnames
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2022-11-25T15:21:32Z
| 2022-11-29T08:21:25Z
| 2022-11-29T08:21:25Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Currently, the `xopen` function has a bug with local Windows pathnames.
From its implementation:
```python
def xopen(file: str, mode="r", *args, **kwargs):
file = _as_posix(PurePath(file))
main_hop, *rest_hops = file.split("::")
if is_local_path(main_hop):
return open(file, mode, *args, **kwargs)
```
On a Windows machine, if we pass the argument:
```python
xopen("C:\\Users\\USERNAME\\filename.txt")
```
it returns
```python
open("C:/Users/USERNAME/filename.txt")
```
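A minimal illustration of the conversion described above, using `PureWindowsPath` so the Windows behavior can be reproduced on any platform:

```python
from pathlib import PureWindowsPath

path = "C:\\Users\\USERNAME\\filename.txt"
# On Windows, PurePath resolves to PureWindowsPath, so xopen ends up passing
# the forward-slash form of the local path to open().
print(PureWindowsPath(path).as_posix())  # C:/Users/USERNAME/filename.txt
```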
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5298/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4618
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4618/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4618/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4618/events
|
https://github.com/huggingface/datasets/issues/4618
| 1,292,078,225
|
I_kwDODunzps5NA4yR
| 4,618
|
contribute data loading for object detection datasets with yolo data format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4",
"events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}",
"followers_url": "https://api.github.com/users/faizankshaikh/followers",
"following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}",
"gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/faizankshaikh",
"id": 8406903,
"login": "faizankshaikh",
"node_id": "MDQ6VXNlcjg0MDY5MDM=",
"organizations_url": "https://api.github.com/users/faizankshaikh/orgs",
"received_events_url": "https://api.github.com/users/faizankshaikh/received_events",
"repos_url": "https://api.github.com/users/faizankshaikh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/faizankshaikh",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoestq @albertvillanova @polinaeterna?",
"@mariosasko sounds good to me!\r\n",
"Thank you for the suggestion @mariosasko . I agree with the point, but I have a few doubts\r\n\r\n1. How would the user access the script if it's not a part of the core codebase?\r\n2. Could you direct me as to what will be the tasks I have to do to contribute to the code? As per my understanding, it would be like\r\n 1. Create a new org \"hf-loaders\" and add you (and more HF people) to the org\r\n 2. Add data loader script as a (model?)\r\n 3. Test it with a dataset on HF hub\r\n3. We should maybe brainstorm as to which public datasets have this format (YOLO type) and are the most important ones to test the script with. We can even add the datasets on HF Hub alongside the script",
"1. Like this: `load_dataset(\"hf-loaders/yolo\", data_files=...)`\r\n2. The steps would be:\r\n 1. Create a new org `hf-community-loaders` (IMO a better name than \"hf-loaders\") and add me (as an admin)\r\n 2. Create a new dataset repo `yolo` and add the loading script to it (`yolo.py`)\r\n 3. Open a discussion to request our review\r\n4. I like this idea. Another option is to add snippets that describe how to load such datasets using the `yolo` loader."
] | 2022-07-02T15:21:59Z
| 2022-07-21T14:10:44Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
**Is your feature request related to a problem? Please describe.**
At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/2))
**Describe the solution you'd like**
I wrote a [custom script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) to load dataset which has YOLO data format.
**Describe alternatives you've considered**
The script can either be a standalone dataset builder, or a modified version of `ImageFolder`
**Additional context**
I would be happy to contribute to this, but I would do it at a very slow pace (maybe a month or two) as I have my exams approaching 😄
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4618/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4618/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5823
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5823/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5823/events
|
https://github.com/huggingface/datasets/issues/5823
| 1,697,024,789
|
I_kwDODunzps5lJosV
| 5,823
|
[2.12.0] DatasetDict.save_to_disk not saving to S3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5233185?v=4",
"events_url": "https://api.github.com/users/thejamesmarq/events{/privacy}",
"followers_url": "https://api.github.com/users/thejamesmarq/followers",
"following_url": "https://api.github.com/users/thejamesmarq/following{/other_user}",
"gists_url": "https://api.github.com/users/thejamesmarq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thejamesmarq",
"id": 5233185,
"login": "thejamesmarq",
"node_id": "MDQ6VXNlcjUyMzMxODU=",
"organizations_url": "https://api.github.com/users/thejamesmarq/orgs",
"received_events_url": "https://api.github.com/users/thejamesmarq/received_events",
"repos_url": "https://api.github.com/users/thejamesmarq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thejamesmarq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thejamesmarq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thejamesmarq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```",
"Ugh, yeah that was it. Thank you!",
"Hi @thejamesmarq, by any chance, did you use multiprocessing `num_proc > 1` when saving your dataset on the s3 bucket ? I'm struggling making it work in a multiprocessing setting while everything works fine with one processor."
] | 2023-05-05T05:22:59Z
| 2024-05-30T16:11:31Z
| 2023-05-05T15:01:17Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket.
I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results.
### Steps to reproduce the bug
1. Create a DatsetDict `dataset`
2. Create a S3FileSystem object
`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`
3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)`
4. Check the corresponding S3 bucket and verify nothing has been uploaded
5. Check the path at f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that files have been saved there
### Expected behavior
Artifacts are uploaded at the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-x86_64-i386-64bit
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
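For reference, the resolution suggested in the comments (prefixing the target path with the `s3://` scheme) can be sketched like this; the bucket, directory, and credentials below are placeholders:
```python
import datasets
from datasets import Dataset, DatasetDict

# Placeholder credentials -- replace with real values or rely on the default AWS config.
s3 = datasets.filesystems.S3FileSystem(key="AWS_ACCESS_KEY_ID", secret="AWS_SECRET_ACCESS_KEY")

dataset_dict = DatasetDict({"train": Dataset.from_dict({"text": ["aaa"]})})

# The key point from the discussion: the path must carry the "s3://" scheme,
# otherwise save_to_disk treats it as a local filesystem path.
dataset_dict.save_to_disk(
    "s3://my-bucket/my-dir/my-dataset",
    storage_options=s3.storage_options,
)
```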
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5233185?v=4",
"events_url": "https://api.github.com/users/thejamesmarq/events{/privacy}",
"followers_url": "https://api.github.com/users/thejamesmarq/followers",
"following_url": "https://api.github.com/users/thejamesmarq/following{/other_user}",
"gists_url": "https://api.github.com/users/thejamesmarq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thejamesmarq",
"id": 5233185,
"login": "thejamesmarq",
"node_id": "MDQ6VXNlcjUyMzMxODU=",
"organizations_url": "https://api.github.com/users/thejamesmarq/orgs",
"received_events_url": "https://api.github.com/users/thejamesmarq/received_events",
"repos_url": "https://api.github.com/users/thejamesmarq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thejamesmarq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thejamesmarq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thejamesmarq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5823/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6360
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6360/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6360/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6360/events
|
https://github.com/huggingface/datasets/issues/6360
| 1,965,672,950
|
I_kwDODunzps51Kcn2
| 6,360
|
Add support for `Sequence(Audio/Image)` feature in `push_to_hub`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Laurent2916",
"id": 21087104,
"login": "Laurent2916",
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Laurent2916",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null |
[
"This issue stems from https://github.com/huggingface/datasets/blob/6d2f2a5e0fea3827eccfd1717d8021c15fc4292a/src/datasets/table.py#L2203-L2205\r\n\r\nI'll address it as part of https://github.com/huggingface/datasets/pull/6283.\r\n\r\nIn the meantime, this should work\r\n\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets import Image\r\n\r\ndataset = dataset.with_format(\"arrow\")\r\n\r\ndef embed_images(pa_table):\r\n images_arr = pa.chunked_array(\r\n [\r\n pa.ListArray.from_arrays(chunk.offsets, Image().embed_storage(chunk.values), mask=chunk.is_null())\r\n for chunk in pa_table[\"images\"].chunks\r\n ]\r\n )\r\n return pa_table.set_column(pa_table.schema.get_field_index(\"images\"), \"images\", images_arr)\r\n\r\ndataset = dataset.map(embed_images, batched=True)\r\n\r\ndataset = dataset.with_format(\"python\")\r\n\r\ndataset.push_to_hub(...)\r\n```"
] | 2023-10-27T14:39:57Z
| 2024-02-06T19:24:20Z
| 2024-02-06T19:24:20Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards.
### Motivation
Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead of only storing paths to the files.
I've noticed that this behavior does not extend to `Sequence` of `Image`, when working with a [dataset of timelapse images](https://huggingface.co/datasets/1aurent/Human-Embryo-Timelapse).
### Your contribution
I'll submit a PR if I find a way to add this feature
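For context, a minimal sketch of the kind of dataset this request concerns, i.e. a column holding a list of images; the file names and repository id are placeholders:
```python
from datasets import Dataset, Features, Image, Value

# Each row holds a sequence (list) of images referenced by local paths.
features = Features({"id": Value("string"), "images": [Image()]})
ds = Dataset.from_dict(
    {"id": ["embryo_0"], "images": [["frame_000.png", "frame_001.png"]]},
    features=features,
)

# At the time of this issue, push_to_hub(embed_external_files=True) embedded
# plain Image/Audio columns into the shards but not lists of them.
# ds.push_to_hub("username/my-timelapse-dataset")
```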
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6360/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6360/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4814
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4814/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4814/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4814/events
|
https://github.com/huggingface/datasets/issues/4814
| 1,333,356,230
|
I_kwDODunzps5PeWbG
| 4,814
|
Support CSV as metadata file format in AudioFolder/ImageFolder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null |
[] | 2022-08-09T14:36:49Z
| 2022-08-31T11:59:08Z
| 2022-08-31T11:59:08Z
|
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets.
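To make the request concrete, a hedged sketch of what an ImageFolder layout with a CSV metadata file could look like (the directory layout and extra column are illustrative, mirroring the existing `metadata.jsonl` convention):
```python
from datasets import load_dataset

# Assumed layout:
#   data/
#     metadata.csv      # columns: file_name,caption
#     0001.png
#     0002.png
#
# With CSV metadata supported, metadata.csv is picked up the same way
# metadata.jsonl is, and the extra columns become dataset columns.
ds = load_dataset("imagefolder", data_dir="./data", split="train")
print(ds.column_names)  # e.g. ['image', 'caption']
```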
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4814/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4814/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6450
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6450/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6450/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6450/events
|
https://github.com/huggingface/datasets/issues/6450
| 2,009,491,386
|
I_kwDODunzps53xme6
| 6,450
|
Support multiple image/audio columns in ImageFolder/AudioFolder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"A duplicate of https://github.com/huggingface/datasets/issues/5760"
] | 2023-11-24T10:34:09Z
| 2023-11-28T11:07:17Z
| 2023-11-24T17:24:38Z
|
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Have a metadata.csv file with multiple columns that point to relative image or audio files.
### Motivation
Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. In the same way, AudioFolder allows one column, called `file_name`, pointing to relative audio files.
But it's not possible to have two image columns, two audio columns, or one audio column and one image column.
### Your contribution
no specific contribution
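A common workaround while the loaders only accept a single `file_name` column is to read the metadata manually and cast each path column to `Image` (or `Audio`); a minimal sketch, with file and column names as placeholders:
```python
import pandas as pd
from datasets import Dataset, Image

# metadata.csv is assumed to have columns: image_path, mask_path, label
df = pd.read_csv("data/metadata.csv")
ds = Dataset.from_pandas(df)

# Casting a string column of relative paths to Image() turns it into a
# proper image column, which is what a multi-column ImageFolder would provide natively.
ds = ds.cast_column("image_path", Image())
ds = ds.cast_column("mask_path", Image())
```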
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6450/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6450/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6163
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6163/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6163/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6163/events
|
https://github.com/huggingface/datasets/issues/6163
| 1,857,682,241
|
I_kwDODunzps5uuftB
| 6,163
|
Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/90616801?v=4",
"events_url": "https://api.github.com/users/shishirCTC/events{/privacy}",
"followers_url": "https://api.github.com/users/shishirCTC/followers",
"following_url": "https://api.github.com/users/shishirCTC/following{/other_user}",
"gists_url": "https://api.github.com/users/shishirCTC/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shishirCTC",
"id": 90616801,
"login": "shishirCTC",
"node_id": "MDQ6VXNlcjkwNjE2ODAx",
"organizations_url": "https://api.github.com/users/shishirCTC/orgs",
"received_events_url": "https://api.github.com/users/shishirCTC/received_events",
"repos_url": "https://api.github.com/users/shishirCTC/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shishirCTC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shishirCTC/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shishirCTC",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Answered on the forum [here](https://discuss.huggingface.co/t/error-type-arrowinvalid-details-failed-to-parse-string-254-254-as-a-scalar-of-type-int32/51323)."
] | 2023-08-19T11:34:40Z
| 2023-08-21T13:28:16Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
### Describe the bug
I am getting the following error while trying to upload a CSV sheet to train a model. My CSV sheet's content is exactly the same as that shown in the example CSV file on the AutoTrain page. I am attaching a screenshot of the error for reference. I have also tried converting the answer indices that are integers into strings by placing them in inverted commas, as well as leaving them without inverted commas.
Can anyone please help me out?
FYI : I am using Chrome browser.
Error type: ArrowInvalid
Details: Failed to parse string: '[254,254]' as a scalar of type int32

### Steps to reproduce the bug
Kindly let me know how to fix this?
### Expected behavior
Kindly let me know how to fix this?
### Environment info
Kindly let me know how to fix this?
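Not the forum answer referenced in the comments, just a generic sketch of one way to avoid parsing a stringified list such as '[254,254]' as an int32 scalar: turn it into an actual list before building the dataset (file and column names are assumptions):
```python
import ast
import pandas as pd
from datasets import Dataset

df = pd.read_csv("train.csv")
# "[254,254]" is a string; literal_eval converts it into a Python list of ints,
# so no int32 scalar parse is ever attempted on it.
df["answer_start"] = df["answer_start"].apply(ast.literal_eval)
ds = Dataset.from_pandas(df)
```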
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6163/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6163/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5022
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5022/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5022/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5022/events
|
https://github.com/huggingface/datasets/pull/5022
| 1,385,432,859
|
PR_kwDODunzps4_kxYe
| 5,022
|
Fix languages of X-CSQA configs in xcsr dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4059), now fixes in all datasets are immediately accessible. You can try it:\r\n```python\r\nfrench = datasets.load_dataset(\"xcsr\", \"X-CSQA-fr\")\r\n```\r\n\r\nPlease note there is an additional fix to that dataset in progress (to be merged today):\r\n- #5024"
] | 2022-09-26T05:13:39Z
| 2022-09-26T12:27:20Z
| 2022-09-26T10:57:30Z
|
MEMBER
| null | null | null |
Fix #5017.
CC: @yangxqiao, @yuchenlin
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5022/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5022/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"merged_at": "2022-09-26T10:57:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7343
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7343/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7343/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7343/events
|
https://github.com/huggingface/datasets/issues/7343
| 2,750,525,823
|
I_kwDODunzps6j8bF_
| 7,343
|
[Bug] Inconsistent behavior of data_files and data_dir in load_dataset method.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74161960?v=4",
"events_url": "https://api.github.com/users/JasonCZH4/events{/privacy}",
"followers_url": "https://api.github.com/users/JasonCZH4/followers",
"following_url": "https://api.github.com/users/JasonCZH4/following{/other_user}",
"gists_url": "https://api.github.com/users/JasonCZH4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JasonCZH4",
"id": 74161960,
"login": "JasonCZH4",
"node_id": "MDQ6VXNlcjc0MTYxOTYw",
"organizations_url": "https://api.github.com/users/JasonCZH4/orgs",
"received_events_url": "https://api.github.com/users/JasonCZH4/received_events",
"repos_url": "https://api.github.com/users/JasonCZH4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JasonCZH4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JasonCZH4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JasonCZH4",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! `data_files` with a list is equivalent to `data_files={\"train\": data_files}` with a train test only.\r\n\r\nWhen no split are specified, they are inferred based on file names, and files with no apparent split are ignored",
"Thanks for your reply!\r\n`files with no apparent split are ignored`. Is there a option that I can choose to ignored it or not as I mention aboved? Thanks!",
"To include all the files, the best way is to pass `data_files` yourself. There is no option to disable split detection at the moment",
"Thanks! I hope you guys can consider adding this option in the future. :)"
] | 2024-12-19T14:31:27Z
| 2025-01-03T15:54:09Z
| 2025-01-03T15:54:09Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Inconsistent behavior of `data_files` and `data_dir` in the `load_dataset` method.
### Steps to reproduce the bug
# First
I have three files, named 'train.json', 'val.json', 'test.json'.
Each one has a simple dict `{text:'aaa'}`.
Their paths are `/data/train.json`, `/data/val.json`, and `/data/test.json`.
I load dataset with `data_files` argument:
```py
import os
from datasets import load_dataset

files = [os.path.join('./data', file) for file in os.listdir('./data')]
ds = load_dataset(
path='json',
data_files=files,)
```
And I get:
```py
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 3
})
})
```
However, If I load dataset with `data_dir` argument:
```py
ds = load_dataset(
path='json',
data_dir='./data',)
```
And I get:
```py
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 1
})
validation: Dataset({
features: ['text'],
num_rows: 1
})
test: Dataset({
features: ['text'],
num_rows: 1
})
})
```
The two results are not the same: the two arguments do not behave equivalently, even though the statement [here](https://github.com/huggingface/datasets/blob/d0c152a979d91cc34b605c0298aebc650ab7dd27/src/datasets/load.py#L1790) says that their behaviors are equal.
# Second
If some filenames include 'test' while others do not, `load_dataset` only returns the `test` dataset and the other files are **abandoned**.
Given two files named `test.json` and `1.json`
Each one has a simple dict `{text:'aaa'}`.
I load the dataset using:
```py
ds = load_dataset(
path='json',
data_dir='./data',)
```
Only `test` is returned, `1.json` is missing:
```py
DatasetDict({
test: Dataset({
features: ['text'],
num_rows: 1
})
})
```
Things do not change even if I manually set `split='train'`.
### Expected behavior
1. Fix the above bugs.
2. Although the documentation says that the `load_dataset` method will `Find which file goes into which split (e.g. train/test) based on file and directory names or on the YAML configuration`, I hope I can manually decide whether to do so. Sometimes users may accidentally put a `test` string in a filename when they just want a single `train` dataset. If the number of files in `data_dir` is huge, it's not easy to find out what causes the second situation mentioned above.
### Environment info
datasets==3.2.0
Ubuntu18.84
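Following the maintainer's suggestion in the comments (pass `data_files` yourself when you need full control over which file lands in which split), a minimal sketch with the three files from this report:
```py
from datasets import load_dataset

# An explicit mapping disables filename-based split detection entirely,
# so files with "test" in their name are not silently re-routed or dropped.
ds = load_dataset(
    "json",
    data_files={
        "train": "./data/train.json",
        "validation": "./data/val.json",
        "test": "./data/test.json",
    },
)
```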
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74161960?v=4",
"events_url": "https://api.github.com/users/JasonCZH4/events{/privacy}",
"followers_url": "https://api.github.com/users/JasonCZH4/followers",
"following_url": "https://api.github.com/users/JasonCZH4/following{/other_user}",
"gists_url": "https://api.github.com/users/JasonCZH4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JasonCZH4",
"id": 74161960,
"login": "JasonCZH4",
"node_id": "MDQ6VXNlcjc0MTYxOTYw",
"organizations_url": "https://api.github.com/users/JasonCZH4/orgs",
"received_events_url": "https://api.github.com/users/JasonCZH4/received_events",
"repos_url": "https://api.github.com/users/JasonCZH4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JasonCZH4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JasonCZH4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JasonCZH4",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7343/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7343/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6714
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6714/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6714/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6714/events
|
https://github.com/huggingface/datasets/pull/6714
| 2,167,569,080
|
PR_kwDODunzps5ooQd2
| 6,714
|
Expand no-code dataset info with datasets-server info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6714). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005237 / 0.011353 (-0.006116) | 0.003614 / 0.011008 (-0.007394) | 0.063349 / 0.038508 (0.024841) | 0.027297 / 0.023109 (0.004187) | 0.236203 / 0.275898 (-0.039695) | 0.260029 / 0.323480 (-0.063451) | 0.003096 / 0.007986 (-0.004889) | 0.003342 / 0.004328 (-0.000987) | 0.048703 / 0.004250 (0.044453) | 0.043121 / 0.037052 (0.006069) | 0.257491 / 0.258489 (-0.000998) | 0.282861 / 0.293841 (-0.010980) | 0.027701 / 0.128546 (-0.100845) | 0.010634 / 0.075646 (-0.065012) | 0.207369 / 0.419271 (-0.211903) | 0.035799 / 0.043533 (-0.007734) | 0.240445 / 0.255139 (-0.014694) | 0.261977 / 0.283200 (-0.021223) | 0.018175 / 0.141683 (-0.123508) | 1.143964 / 1.452155 (-0.308191) | 1.230057 / 1.492716 (-0.262659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096656 / 0.018006 (0.078650) | 0.303434 / 0.000490 (0.302944) | 0.000225 / 0.000200 (0.000025) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018454 / 0.037411 (-0.018957) | 0.061792 / 0.014526 (0.047266) | 0.073384 / 0.176557 (-0.103172) | 0.120148 / 0.737135 (-0.616988) | 0.074221 / 0.296338 (-0.222118) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290291 / 0.215209 (0.075082) | 2.822908 / 2.077655 (0.745254) | 1.483139 / 1.504120 (-0.020981) | 1.349619 / 1.541195 (-0.191576) | 1.356588 / 
1.468490 (-0.111902) | 0.571723 / 4.584777 (-4.013054) | 2.402696 / 3.745712 (-1.343016) | 2.832215 / 5.269862 (-2.437647) | 1.794962 / 4.565676 (-2.770714) | 0.062707 / 0.424275 (-0.361568) | 0.004997 / 0.007607 (-0.002610) | 0.343093 / 0.226044 (0.117049) | 3.383028 / 2.268929 (1.114100) | 1.818624 / 55.444624 (-53.626000) | 1.549859 / 6.876477 (-5.326618) | 1.667838 / 2.142072 (-0.474235) | 0.648574 / 4.805227 (-4.156653) | 0.119181 / 6.500664 (-6.381484) | 0.042074 / 0.075469 (-0.033395) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982039 / 1.841788 (-0.859748) | 11.411759 / 8.074308 (3.337451) | 9.783405 / 10.191392 (-0.407987) | 0.129577 / 0.680424 (-0.550847) | 0.014091 / 0.534201 (-0.520110) | 0.297925 / 0.579283 (-0.281358) | 0.263884 / 0.434364 (-0.170480) | 0.346032 / 0.540337 (-0.194305) | 0.444806 / 1.386936 (-0.942130) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005527 / 0.011353 (-0.005826) | 0.003677 / 0.011008 (-0.007332) | 0.050245 / 0.038508 (0.011737) | 0.030070 / 0.023109 (0.006961) | 0.272640 / 0.275898 (-0.003258) | 0.296555 / 0.323480 (-0.026925) | 0.004247 / 0.007986 (-0.003738) | 0.003833 / 0.004328 (-0.000495) | 0.049341 / 0.004250 (0.045091) | 0.046604 / 0.037052 (0.009552) | 0.282765 / 0.258489 (0.024276) | 0.314924 / 0.293841 (0.021084) | 0.029749 / 0.128546 (-0.098797) | 0.010524 / 0.075646 (-0.065122) | 0.057859 / 0.419271 (-0.361412) | 0.053172 / 0.043533 (0.009640) | 0.274906 / 0.255139 (0.019767) | 0.290566 / 0.283200 (0.007366) | 0.019299 / 0.141683 (-0.122384) | 1.164092 / 1.452155 (-0.288062) | 1.205074 / 1.492716 (-0.287642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093943 / 0.018006 (0.075936) | 0.298746 / 0.000490 (0.298256) | 0.000232 / 0.000200 (0.000032) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022098 / 0.037411 (-0.015313) | 0.075523 / 0.014526 (0.060997) | 0.086784 / 0.176557 (-0.089773) | 0.124610 / 0.737135 (-0.612525) | 0.087743 / 0.296338 (-0.208595) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298555 / 0.215209 (0.083346) | 2.951493 / 2.077655 (0.873838) | 1.611448 / 1.504120 (0.107328) | 1.481503 / 1.541195 (-0.059692) | 1.497937 / 1.468490 (0.029447) | 0.580402 / 4.584777 (-4.004375) | 2.433308 / 3.745712 (-1.312404) | 2.712717 / 5.269862 (-2.557145) | 1.766286 / 4.565676 (-2.799391) | 0.063973 / 0.424275 (-0.360303) | 0.005006 / 0.007607 (-0.002601) | 0.354541 / 0.226044 (0.128497) | 3.486448 / 2.268929 (1.217519) | 1.972779 / 55.444624 (-53.471846) | 1.709018 / 6.876477 (-5.167458) | 1.864242 / 2.142072 (-0.277831) | 0.678213 / 4.805227 (-4.127014) | 0.119525 / 6.500664 (-6.381140) | 0.041387 / 0.075469 (-0.034082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021337 / 1.841788 (-0.820451) | 12.049563 / 8.074308 (3.975255) | 10.424701 / 10.191392 (0.233309) | 0.131444 / 0.680424 (-0.548980) | 0.015644 / 0.534201 (-0.518557) | 0.293712 / 0.579283 (-0.285571) | 0.279160 / 0.434364 (-0.155204) | 0.327991 / 0.540337 (-0.212346) | 0.435455 / 1.386936 (-0.951481) |\n\n</details>\n</details>\n\n\n"
] | 2024-03-04T19:18:10Z
| 2024-03-04T20:28:30Z
| 2024-03-04T20:22:15Z
|
COLLABORATOR
| null | null | null |
E.g., to have info about a dataset's number of examples for more informative TQDM bars.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6714/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6714/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6714",
"merged_at": "2024-03-04T20:22:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6714"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7274
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7274/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7274/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7274/events
|
https://github.com/huggingface/datasets/pull/7274
| 2,629,882,821
|
PR_kwDODunzps6ArEt-
| 7,274
|
[MINOR:TYPO] Fix typo in exception text
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-11-01T21:15:29Z
| 2024-11-01T21:15:54Z
| null |
CONTRIBUTOR
| null | null | null | null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7274/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7274/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7274.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7274",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7274.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7274"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7458
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7458/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7458/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7458/events
|
https://github.com/huggingface/datasets/issues/7458
| 2,925,403,528
|
I_kwDODunzps6uXh2I
| 7,458
|
Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23343961?v=4",
"events_url": "https://api.github.com/users/nikita-savelyevv/events{/privacy}",
"followers_url": "https://api.github.com/users/nikita-savelyevv/followers",
"following_url": "https://api.github.com/users/nikita-savelyevv/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita-savelyevv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nikita-savelyevv",
"id": 23343961,
"login": "nikita-savelyevv",
"node_id": "MDQ6VXNlcjIzMzQzOTYx",
"organizations_url": "https://api.github.com/users/nikita-savelyevv/orgs",
"received_events_url": "https://api.github.com/users/nikita-savelyevv/received_events",
"repos_url": "https://api.github.com/users/nikita-savelyevv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nikita-savelyevv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita-savelyevv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nikita-savelyevv",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[
"thanks for reporting, I released 3.4.1 with a fix"
] | 2025-03-17T14:54:02Z
| 2025-03-17T16:02:04Z
| 2025-03-17T15:25:55Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after updating to `datasets==3.4.0`. The dataset loads fine on v3.3.2.
### Steps to reproduce the bug
Steps to reproduce:
```
pip install datasets==3.4.0
python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)"
```
Results in:
```
$ python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)"
Repo card metadata block was not found. Setting CardData to empty.
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 560/560 [00:00<00:00, 2280.24it/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/load.py", line 2080, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/builder.py", line 1265, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 49, in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 169, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 121, in extract
urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 496, in map_nested
mapped = [
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 497, in <listcomp>
map_nested(
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 513, in map_nested
mapped = [
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 514, in <listcomp>
_single_map_nested((function, obj, batched, batch_size, types, None, True, None))
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 375, in _single_map_nested
return function(data_struct)
File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 131, in _extract
raise NotImplementedError(
NotImplementedError: Extraction protocol for TAR archives like 'hf://datasets/laion/filtered-wit@c38ca7464e9934d9a49f88b3f60f5ad63b245465/data/00000.tar' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
Example usage:
url = dl_manager.download(url)
tar_archive_iterator = dl_manager.iter_archive(url)
for filename, file in tar_archive_iterator:
...
```
### Expected behavior
Dataset loads successfully.
### Environment info
Ubuntu 20.04.6. Python 3.9. Datasets 3.4.0.
pip freeze:
```
aiohappyeyeballs==2.6.1
aiohttp==3.11.14
aiosignal==1.3.2
async-timeout==5.0.1
attrs==25.3.0
certifi==2025.1.31
charset-normalizer==3.4.1
datasets==3.4.0
dill==0.3.8
filelock==3.18.0
frozenlist==1.5.0
fsspec==2024.12.0
huggingface-hub==0.29.3
idna==3.10
multidict==6.1.0
multiprocess==0.70.16
numpy==2.0.2
packaging==24.2
pandas==2.2.3
propcache==0.3.0
pyarrow==19.0.1
python-dateutil==2.9.0.post0
pytz==2025.1
PyYAML==6.0.2
requests==2.32.3
six==1.17.0
tqdm==4.67.1
typing_extensions==4.12.2
tzdata==2025.1
urllib3==2.3.0
xxhash==3.5.0
yarl==1.18.3
```
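Per the maintainer's reply in the comments, the regression was fixed in a patch release, so the practical remedy is a version pin (a sketch; either the patched release or the last version reported to work):
```python
# Either upgrade to the patched release mentioned in the comments:
#   pip install "datasets>=3.4.1"
# or stay on the last version reported to work:
#   pip install "datasets==3.3.2"
from datasets import load_dataset

ds = load_dataset("laion/filtered-wit", split="train", streaming=True)
next(iter(ds))  # should yield a sample instead of raising NotImplementedError
```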
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7458/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7458/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6406
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6406/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6406/events
|
https://github.com/huggingface/datasets/issues/6406
| 1,990,469,045
|
I_kwDODunzps52pCW1
| 6,406
|
CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-11-13T11:36:10Z
| 2023-11-14T10:05:36Z
| 2023-11-14T10:05:36Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
```
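`TypeAliasType` only exists in relatively recent `typing_extensions` releases, so one plausible remedy (an assumption about this CI failure, not necessarily the fix that was actually merged) is to refresh the pinned version:
```python
# A sketch only:
#   pip install --upgrade typing_extensions
# After upgrading, the failing import should resolve:
from typing_extensions import TypeAliasType
```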
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6406/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7116
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7116/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7116/events
|
https://github.com/huggingface/datasets/issues/7116
| 2,475,522,721
|
I_kwDODunzps6TjXqh
| 7,116
|
datasets cannot handle nested json if features is given.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': [{\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n }]\r\n}))\r\n```",
"> Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n> \r\n> ```python\r\n> ds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n> 'ref1': datasets.Value('string'),\r\n> 'ref2': datasets.Value('string'),\r\n> 'cuts': [{\r\n> \"cut1\": datasets.Value(\"uint16\"),\r\n> \"cut2\": datasets.Value(\"uint16\")\r\n> }]\r\n> }))\r\n> ```\r\nThank you!\r\n",
"It works."
] | 2024-08-20T12:27:49Z
| 2024-09-03T10:18:23Z
| 2024-09-03T10:18:07Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I have a JSON file named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without giving features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
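Returning to the uint16 question above, here is a minimal sketch (illustrative, not from the issue itself) that loads the file without `features` first and then casts the nested integer columns down, using the list-of-dicts syntax suggested in the comments instead of `datasets.Sequence`:
```python
# Sketch only: load without features, then cast the integer columns to uint16.
import datasets

ds = datasets.load_dataset("json", data_files="./temp.json")
ds = ds.cast(datasets.Features({
    "ref1": datasets.Value("string"),
    "ref2": datasets.Value("string"),
    "cuts": [{  # plain list of dicts instead of datasets.Sequence
        "cut1": datasets.Value("uint16"),
        "cut2": datasets.Value("uint16"),
    }],
}))
print(ds["train"].features)  # cut1/cut2 should now be uint16
```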
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7116/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7116/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7500
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7500/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7500/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7500/events
|
https://github.com/huggingface/datasets/issues/7500
| 2,974,841,921
|
I_kwDODunzps6xUHxB
| 7,500
|
Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3817460?v=4",
"events_url": "https://api.github.com/users/benglewis/events{/privacy}",
"followers_url": "https://api.github.com/users/benglewis/followers",
"following_url": "https://api.github.com/users/benglewis/following{/other_user}",
"gists_url": "https://api.github.com/users/benglewis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/benglewis",
"id": 3817460,
"login": "benglewis",
"node_id": "MDQ6VXNlcjM4MTc0NjA=",
"organizations_url": "https://api.github.com/users/benglewis/orgs",
"received_events_url": "https://api.github.com/users/benglewis/received_events",
"repos_url": "https://api.github.com/users/benglewis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/benglewis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benglewis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/benglewis",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Does the torch `DataLoader` really require the dataset to be a subclass of `torch.utils.data.Dataset` ? Or is there a simpler type we could use ?\n\nPS: also note that a dataset without `with_format()` can also be used in a torch `DataLoader` . Calling `with_format(\"torch\")` simply makes the output of the dataset torch Tensors in an efficient way."
] | 2025-04-06T09:56:09Z
| 2025-04-15T12:57:39Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `DataLoader`, since it does not indicate that the HuggingFace `Dataset` is compatible with PyTorch's `Dataset` class. It would be great if we could get the typing to work nicely.
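A minimal sketch of the friction this causes today (illustrative only; the dataset name is an arbitrary example): without a typing relationship between the two `Dataset` classes, type-checkers flag the `DataLoader` call even though it works at runtime, so users end up casting.
```python
# Illustrative sketch; "imdb" is just an example dataset.
from typing import cast

import torch.utils.data
from datasets import load_dataset

ds = load_dataset("imdb", split="train").with_format("torch")

# Works at runtime, but pyright/Pylance reports that `ds` is not a
# torch.utils.data.Dataset, so an explicit cast is needed today.
loader = torch.utils.data.DataLoader(
    cast(torch.utils.data.Dataset, ds),
    batch_size=8,
)
```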
### Motivation
To avoid casting types in our Python code.
### Your contribution
I would be happy to contribute a PR if this is something that may be accepted and could work with the current approach.
This doesn't have to be just for PyTorch; I imagine the same thing would be useful for `tensorflow` and such, but we only have a need for PyTorch at this stage.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7500/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7500/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5248
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5248/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5248/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5248/events
|
https://github.com/huggingface/datasets/pull/5248
| 1,451,338,676
|
PR_kwDODunzps5DAqwt
| 5,248
|
Complete doc migration
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5248). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the fix @mishig25.\r\n\r\nI guess this is the reason why the docs are not generated for the latest release version 2.7.0? https://huggingface.co/docs/datasets/index "
] | 2022-11-16T10:41:04Z
| 2022-11-16T15:06:50Z
| 2022-11-16T10:41:10Z
|
NONE
| null | null | null |
Reverts huggingface/datasets#5214
Everything is handled on the doc-builder side now 😊
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5248/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5248/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5248",
"merged_at": "2022-11-16T10:41:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5248"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5342
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5342/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5342/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5342/events
|
https://github.com/huggingface/datasets/issues/5342
| 1,485,244,178
|
I_kwDODunzps5YhwcS
| 5,342
|
Emotion dataset cannot be downloaded
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78887193?v=4",
"events_url": "https://api.github.com/users/cbarond/events{/privacy}",
"followers_url": "https://api.github.com/users/cbarond/followers",
"following_url": "https://api.github.com/users/cbarond/following{/other_user}",
"gists_url": "https://api.github.com/users/cbarond/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cbarond",
"id": 78887193,
"login": "cbarond",
"node_id": "MDQ6VXNlcjc4ODg3MTkz",
"organizations_url": "https://api.github.com/users/cbarond/orgs",
"received_events_url": "https://api.github.com/users/cbarond/received_events",
"repos_url": "https://api.github.com/users/cbarond/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cbarond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbarond/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cbarond",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @cbarond there's already an open issue at https://github.com/dair-ai/emotion_dataset/issues/5, as the data seems to be missing now, so check that issue instead 👍🏻 ",
"Thanks @cbarond for reporting and @alvarobartt for pointing to the issue we opened in the author's repo.\r\n\r\nIndeed, this issue was first raised in the \"emotion\" dataset Community tab: https://huggingface.co/datasets/emotion/discussions/3\r\n\r\nI'm closing this issue and leave the issue above for the subsequent updates.\r\n\r\nDuplicate of: https://huggingface.co/datasets/emotion/discussions/3",
"try using \"SetFit/emotion\" instead",
"> try using \"SetFit/emotion\" instead\r\n\r\nI' replaced \"emotion\" with \"SetFit/Emotion\", but the code is getting stuck at\r\n\r\n`emotions = load_dataset(\"SetFit/emotion\")`\r\n\r\nI pause execution using the debugger, and it takes me to filelock.py:226\r\n\r\n`with self._thread_lock:`\r\n\r\nDo you know a way to get past this issue?",
"thanks @honeyimholm - worked for me",
"> try using \"SetFit/emotion\" instead\r\n\r\nIt really helps a lot, thank you!",
"The dataset loading script has been fixed: https://huggingface.co/datasets/emotion/discussions/4"
] | 2022-12-08T19:07:09Z
| 2023-02-23T19:13:19Z
| 2022-12-09T10:46:11Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`.
It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022).
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("emotion")
```
### Expected behavior
The dataset should load properly.
### Environment info
- `datasets` version: 2.7.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- PyArrow version: 10.0.1
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5342/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5342/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4631
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4631/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4631/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4631/events
|
https://github.com/huggingface/datasets/pull/4631
| 1,293,545,900
|
PR_kwDODunzps460Vy0
| 4,631
|
Update WinoBias README
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-04T20:24:40Z
| 2022-07-07T13:23:32Z
| 2022-07-07T13:11:47Z
|
CONTRIBUTOR
| null | null | null |
I'm adding some information about WinoBias that I got from the paper :smile:
I think this makes it a bit clearer!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4631/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4631/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4631",
"merged_at": "2022-07-07T13:11:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4631"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6064
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6064/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6064/events
|
https://github.com/huggingface/datasets/pull/6064
| 1,818,703,725
|
PR_kwDODunzps5WPzAv
| 6,064
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6064). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006704 / 0.011353 (-0.004649) | 0.004208 / 0.011008 (-0.006800) | 0.085895 / 0.038508 (0.047387) | 0.079303 / 0.023109 (0.056193) | 0.353430 / 0.275898 (0.077532) | 0.390814 / 0.323480 (0.067334) | 0.006565 / 0.007986 (-0.001420) | 0.003588 / 0.004328 (-0.000740) | 0.065249 / 0.004250 (0.060999) | 0.059772 / 0.037052 (0.022720) | 0.356315 / 0.258489 (0.097826) | 0.404812 / 0.293841 (0.110971) | 0.031127 / 0.128546 (-0.097419) | 0.008656 / 0.075646 (-0.066991) | 0.288734 / 0.419271 (-0.130537) | 0.053157 / 0.043533 (0.009625) | 0.354651 / 0.255139 (0.099512) | 0.370590 / 0.283200 (0.087391) | 0.024944 / 0.141683 (-0.116738) | 1.472393 / 1.452155 (0.020238) | 1.548946 / 1.492716 (0.056229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223430 / 0.018006 (0.205424) | 0.567359 / 0.000490 (0.566870) | 0.006744 / 0.000200 (0.006544) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030174 / 0.037411 (-0.007237) | 0.084865 / 0.014526 (0.070339) | 0.098986 / 0.176557 (-0.077571) | 0.161458 / 0.737135 (-0.575678) | 0.099198 / 0.296338 (-0.197141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404324 / 0.215209 (0.189115) | 4.043744 / 2.077655 (1.966090) | 1.972834 / 1.504120 (0.468714) | 1.801634 / 1.541195 (0.260439) | 1.891198 / 1.468490 
(0.422708) | 0.488511 / 4.584777 (-4.096266) | 3.566890 / 3.745712 (-0.178823) | 3.369415 / 5.269862 (-1.900447) | 2.054995 / 4.565676 (-2.510682) | 0.057225 / 0.424275 (-0.367050) | 0.007360 / 0.007607 (-0.000247) | 0.471813 / 0.226044 (0.245769) | 4.734397 / 2.268929 (2.465468) | 2.526585 / 55.444624 (-52.918039) | 2.230535 / 6.876477 (-4.645942) | 2.434403 / 2.142072 (0.292330) | 0.630090 / 4.805227 (-4.175137) | 0.138544 / 6.500664 (-6.362120) | 0.060099 / 0.075469 (-0.015370) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260951 / 1.841788 (-0.580837) | 20.051513 / 8.074308 (11.977204) | 14.675938 / 10.191392 (4.484546) | 0.169535 / 0.680424 (-0.510889) | 0.018574 / 0.534201 (-0.515627) | 0.394255 / 0.579283 (-0.185028) | 0.412713 / 0.434364 (-0.021651) | 0.475891 / 0.540337 (-0.064446) | 0.658223 / 1.386936 (-0.728713) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006969 / 0.011353 (-0.004384) | 0.004417 / 0.011008 (-0.006591) | 0.064399 / 0.038508 (0.025891) | 0.082928 / 0.023109 (0.059819) | 0.402285 / 0.275898 (0.126387) | 0.440032 / 0.323480 (0.116552) | 0.005896 / 0.007986 (-0.002090) | 0.003580 / 0.004328 (-0.000749) | 0.065340 / 0.004250 (0.061090) | 0.060363 / 0.037052 (0.023311) | 0.417413 / 0.258489 (0.158924) | 0.448527 / 0.293841 (0.154686) | 0.032238 / 0.128546 (-0.096308) | 0.008820 / 0.075646 (-0.066826) | 0.071516 / 0.419271 (-0.347755) | 0.050614 / 0.043533 (0.007081) | 0.406565 / 0.255139 (0.151426) | 0.422527 / 0.283200 (0.139328) | 0.025866 / 0.141683 (-0.115817) | 1.512256 / 1.452155 (0.060101) | 1.568433 / 1.492716 (0.075717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266521 / 0.018006 (0.248515) | 0.564524 / 0.000490 (0.564034) | 0.005236 / 0.000200 (0.005036) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031998 / 0.037411 (-0.005413) | 0.090754 / 0.014526 (0.076229) | 0.105954 / 0.176557 (-0.070602) | 0.164506 / 0.737135 (-0.572629) | 0.108792 / 0.296338 (-0.187546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422044 / 0.215209 (0.206835) | 4.204449 / 2.077655 (2.126795) | 2.232060 / 1.504120 (0.727940) | 2.060389 / 1.541195 (0.519194) | 2.152723 / 1.468490 (0.684233) | 0.488456 / 4.584777 (-4.096321) | 3.591102 / 3.745712 (-0.154611) | 5.250401 / 5.269862 (-0.019461) | 3.060259 / 4.565676 (-1.505417) | 0.057558 / 0.424275 (-0.366717) | 0.007881 / 0.007607 (0.000274) | 0.508631 / 0.226044 (0.282587) | 5.064857 / 2.268929 (2.795928) | 2.719068 / 55.444624 (-52.725556) | 2.389992 / 6.876477 (-4.486485) | 2.595073 / 2.142072 (0.453000) | 0.590179 / 4.805227 (-4.215048) | 0.136149 / 6.500664 (-6.364515) | 0.062546 / 0.075469 (-0.012923) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369252 / 1.841788 (-0.472535) | 20.637580 / 8.074308 (12.563272) | 14.217129 / 10.191392 (4.025737) | 0.195464 / 0.680424 (-0.484960) | 0.018452 / 0.534201 (-0.515749) | 0.397044 / 0.579283 (-0.182239) | 0.401127 / 0.434364 (-0.033237) | 0.465033 / 0.540337 (-0.075305) | 0.613484 / 1.386936 (-0.773452) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006793 / 0.011353 (-0.004559) | 0.004374 / 0.011008 (-0.006635) | 0.084958 / 0.038508 (0.046450) | 0.080440 / 0.023109 (0.057331) | 0.317951 / 0.275898 (0.042053) | 0.376133 / 0.323480 (0.052653) | 0.005775 / 0.007986 (-0.002211) | 0.003644 / 0.004328 (-0.000684) | 0.064823 / 0.004250 (0.060573) | 0.059442 / 0.037052 (0.022390) | 0.319636 / 0.258489 (0.061147) | 0.389668 / 0.293841 (0.095827) | 0.031181 / 0.128546 (-0.097365) | 0.008725 / 0.075646 (-0.066921) | 0.288514 / 0.419271 (-0.130757) | 0.053466 / 0.043533 (0.009933) | 0.323131 / 0.255139 (0.067992) | 0.345276 / 0.283200 (0.062076) | 0.025046 / 0.141683 (-0.116637) | 1.491659 / 1.452155 (0.039504) | 1.562105 / 1.492716 (0.069389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286325 / 0.018006 (0.268319) | 0.578021 / 0.000490 (0.577531) | 0.007240 / 0.000200 (0.007040) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030163 / 0.037411 (-0.007248) | 0.082100 / 0.014526 (0.067574) | 0.098331 / 0.176557 (-0.078225) | 0.160517 / 0.737135 (-0.576618) | 0.098479 / 0.296338 (-0.197859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401782 / 0.215209 (0.186573) | 4.006330 / 2.077655 (1.928675) | 2.033841 / 1.504120 (0.529721) | 1.853248 / 1.541195 (0.312053) | 1.980046 / 1.468490 
(0.511556) | 0.480636 / 4.584777 (-4.104141) | 3.684482 / 3.745712 (-0.061231) | 5.601940 / 5.269862 (0.332079) | 3.369683 / 4.565676 (-1.195993) | 0.057105 / 0.424275 (-0.367170) | 0.007462 / 0.007607 (-0.000145) | 0.474860 / 0.226044 (0.248815) | 4.749624 / 2.268929 (2.480695) | 2.492084 / 55.444624 (-52.952540) | 2.157985 / 6.876477 (-4.718491) | 2.420997 / 2.142072 (0.278925) | 0.574718 / 4.805227 (-4.230509) | 0.134672 / 6.500664 (-6.365992) | 0.061677 / 0.075469 (-0.013792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284151 / 1.841788 (-0.557637) | 20.186823 / 8.074308 (12.112515) | 14.247024 / 10.191392 (4.055632) | 0.171606 / 0.680424 (-0.508818) | 0.018619 / 0.534201 (-0.515582) | 0.394156 / 0.579283 (-0.185127) | 0.424684 / 0.434364 (-0.009679) | 0.476056 / 0.540337 (-0.064281) | 0.668751 / 1.386936 (-0.718185) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006807 / 0.011353 (-0.004546) | 0.004142 / 0.011008 (-0.006867) | 0.065503 / 0.038508 (0.026995) | 0.083232 / 0.023109 (0.060122) | 0.378278 / 0.275898 (0.102380) | 0.410191 / 0.323480 (0.086711) | 0.005660 / 0.007986 (-0.002326) | 0.003486 / 0.004328 (-0.000842) | 0.066109 / 0.004250 (0.061859) | 0.059654 / 0.037052 (0.022601) | 0.375965 / 0.258489 (0.117476) | 0.420046 / 0.293841 (0.126205) | 0.031587 / 0.128546 (-0.096959) | 0.008693 / 0.075646 (-0.066953) | 0.071121 / 0.419271 (-0.348151) | 0.049468 / 0.043533 (0.005935) | 0.373785 / 0.255139 (0.118646) | 0.395577 / 0.283200 (0.112377) | 0.024138 / 0.141683 (-0.117545) | 1.465451 / 1.452155 (0.013297) | 1.547565 / 1.492716 (0.054849) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325241 / 0.018006 (0.307234) | 0.532415 / 0.000490 (0.531925) | 0.004755 / 0.000200 (0.004555) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033472 / 0.037411 (-0.003939) | 0.090574 / 0.014526 (0.076048) | 0.106712 / 0.176557 (-0.069845) | 0.164353 / 0.737135 (-0.572783) | 0.109344 / 0.296338 (-0.186994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420161 / 0.215209 (0.204952) | 4.192334 / 2.077655 (2.114679) | 2.178181 / 1.504120 (0.674061) | 2.017405 / 1.541195 (0.476211) | 2.182783 / 1.468490 (0.714293) | 0.484037 / 4.584777 (-4.100740) | 3.641911 / 3.745712 (-0.103801) | 5.543874 / 5.269862 (0.274013) | 3.440084 / 4.565676 (-1.125593) | 0.056662 / 0.424275 (-0.367614) | 0.007773 / 0.007607 (0.000166) | 0.498357 / 0.226044 (0.272313) | 4.951315 / 2.268929 (2.682386) | 2.656732 / 55.444624 (-52.787892) | 2.370566 / 6.876477 (-4.505910) | 2.682289 / 2.142072 (0.540217) | 0.598479 / 4.805227 (-4.206749) | 0.151546 / 6.500664 (-6.349118) | 0.063278 / 0.075469 (-0.012191) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.385897 / 1.841788 (-0.455891) | 20.961851 / 8.074308 (12.887543) | 14.465688 / 10.191392 (4.274296) | 0.166156 / 0.680424 (-0.514268) | 0.018848 / 0.534201 (-0.515353) | 0.401712 / 0.579283 (-0.177571) | 0.416674 / 0.434364 (-0.017690) | 0.471834 / 0.540337 (-0.068503) | 0.622463 / 1.386936 (-0.764473) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-24T15:56:00Z
| 2023-07-24T16:05:19Z
| 2023-07-24T15:56:10Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6064/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6064",
"merged_at": "2023-07-24T15:56:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6064"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6536
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6536/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6536/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6536/events
|
https://github.com/huggingface/datasets/issues/6536
| 2,056,863,239
|
I_kwDODunzps56mT4H
| 6,536
|
datasets.load_dataset raises FileNotFoundError for datasets==2.16.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46237844?v=4",
"events_url": "https://api.github.com/users/ArvinZhuang/events{/privacy}",
"followers_url": "https://api.github.com/users/ArvinZhuang/followers",
"following_url": "https://api.github.com/users/ArvinZhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/ArvinZhuang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArvinZhuang",
"id": 46237844,
"login": "ArvinZhuang",
"node_id": "MDQ6VXNlcjQ2MjM3ODQ0",
"organizations_url": "https://api.github.com/users/ArvinZhuang/orgs",
"received_events_url": "https://api.github.com/users/ArvinZhuang/received_events",
"repos_url": "https://api.github.com/users/ArvinZhuang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArvinZhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArvinZhuang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArvinZhuang",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[
"Hi ! Thanks for reporting\r\n\r\nThis is a bug in 2.16.0 for some datasets when `cache_dir` is a relative path. I opened https://github.com/huggingface/datasets/pull/6543 to fix this",
"We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```"
] | 2023-12-27T03:15:48Z
| 2023-12-30T18:58:04Z
| 2023-12-30T15:54:00Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
It seems `datasets.load_dataset` raises `FileNotFoundError` for some Hub datasets with the latest `datasets==2.16.0`.
### Steps to reproduce the bug
For example `pip install datasets==2.16.0`
then
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache1')["train"]
```
This will raise:
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/load.py", line 2545, in load_dataset
builder_instance.download_and_prepare(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1003, in download_and_prepare
self._download_and_prepare(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1076, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 43, in _split_generators
data_files = dl_manager.download_and_extract(self.config.data_files)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 566, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 539, in extract
extracted_paths = map_nested(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 466, in map_nested
mapped = [
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 467, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 370, in _single_map_nested
return function(data_struct)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 451, in _download
out = cached_path(url_or_filename, download_config=download_config)
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 188, in cached_path
output_path = get_from_cache(
File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 570, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wentingzhao/anthropic-hh-first-prompt/resolve/11b393a5545f706a357ebcd4a5285d93db176715/cache1/downloads/87d66c365626feca116cba323c4856c9aae056e4503f09f23e34aa085eb9de15
```
However, it seems to work fine for some datasets; for example, `datasets.load_dataset("ag_news", cache_dir='cache2')["test"]` works fine.
The same dataset also works fine with datasets==2.15.0, for example `pip install datasets==2.15.0`,
then
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache3')["train"]
Dataset({
features: ['user', 'system', 'source'],
num_rows: 8552
})
```
### Expected behavior
2.16.0 should work the same as 2.15.0 for all datasets.
### Environment info
Python 3.9
conda env
tested on macOS and Linux
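A workaround sketch based on the maintainer comments above (assumption: since the 2.16.0 bug is triggered by a relative `cache_dir`, passing an absolute path avoids it; upgrading to `datasets>=2.16.1` is the proper fix):
```python
# Sketch only: absolute cache_dir as a stop-gap on 2.16.0.
import os

import datasets

ds = datasets.load_dataset(
    "wentingzhao/anthropic-hh-first-prompt",
    cache_dir=os.path.abspath("cache1"),
)["train"]
print(ds)
```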
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6536/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6536/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5377
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5377/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5377/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5377/events
|
https://github.com/huggingface/datasets/pull/5377
| 1,503,477,833
|
PR_kwDODunzps5Fz5lw
| 5,377
|
Add a parallel implementation of to_tf_dataset()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing because the test server uses Py3.7 but the `SharedMemory` features require Py3.8! I forgot we still support 3.7 for another couple of months. I'm not sure exactly how to proceed, whether I should leave this PR until then, or just gate the feature behind a version check and skip the tests until the Python version catches up.",
"I haven't played with `NumpyMultiprocessingGenerator` so I can't really help here, but this sounds promising :) Otherwise I think it's also fine to allow `num_workers` only for py>=3.8 for now. You can skip the test on 3.7 and make sure to raise an informative error if someone wants to use `num_workers` with 3.7",
"Lots of comments here - I'll reply to the specific code comments underneath them, but in response to the general comments:\r\n\r\n@gante: I think this approach is much more performant than a `multiprocessing.Pool`. The reason is that when results are returned from a process `Pool`, the returned Python objects are pickled by the child processes, sent down a pipe and unpickled by the parent process. This creates a huge single-process bottleneck as the parent has to unpickle lots of large NumPy arrays, which is quite slow.\r\n\r\nWhen you use a `SharedMemory` approach, the data is just **there** for the parent process - the child and the parent are writing to exactly the same array in memory, and no pickling or unpickling occurs. This means the parent can just immediately copy the array (which is much faster than unpickling) and yield it to `tf.data`. We're taking advantage of the fact that we know the data is just big NumPy arrays and we don't need the full generality of `pickle`.\r\n\r\n@lhoestq: Sounds good! I'll add a clear error and skip the tests on Py<=3.7.",
"Also, an extra technicality, just for information in case anyone looks at this PR later: Recent versions of Python allow [pickled objects to store out-of-band data](https://peps.python.org/pep-0574/). This allows for very efficient zero-copy unpickling of objects like NumPy arrays, with the unpickled object having a view on the same memory as the original. \r\n\r\nHowever, this explicitly does **not** work when the object is unpickled by a different process than the one that created it. For this to work you must explicitly allocate shared memory and create the array there, which pickle cannot handle for you. As a result, if you just benchmark unpickling vs copying of NumPy arrays it can seem like unpickling is very fast - but this is only true when the pickle was created in the unpickling process!",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008666 / 0.011353 (-0.002687) | 0.004624 / 0.011008 (-0.006384) | 0.099247 / 0.038508 (0.060739) | 0.029766 / 0.023109 (0.006657) | 0.303347 / 0.275898 (0.027449) | 0.370022 / 0.323480 (0.046542) | 0.007128 / 0.007986 (-0.000857) | 0.003446 / 0.004328 (-0.000883) | 0.076670 / 0.004250 (0.072420) | 0.038892 / 0.037052 (0.001840) | 0.313035 / 0.258489 (0.054546) | 0.350503 / 0.293841 (0.056662) | 0.033732 / 0.128546 (-0.094815) | 0.011644 / 0.075646 (-0.064003) | 0.323295 / 0.419271 (-0.095977) | 0.040336 / 0.043533 (-0.003196) | 0.302253 / 0.255139 (0.047114) | 0.337199 / 0.283200 (0.053999) | 0.089454 / 0.141683 (-0.052229) | 1.624906 / 1.452155 (0.172752) | 1.546187 / 1.492716 (0.053470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184614 / 0.018006 (0.166608) | 0.427397 / 0.000490 (0.426907) | 0.003342 / 0.000200 (0.003142) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023684 / 0.037411 (-0.013727) | 0.100095 / 0.014526 (0.085569) | 0.104996 / 0.176557 (-0.071560) | 0.144719 / 0.737135 (-0.592416) | 0.110759 / 0.296338 (-0.185579) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421108 / 0.215209 (0.205899) | 4.214094 / 2.077655 (2.136440) | 1.906231 / 1.504120 (0.402111) | 1.698000 / 1.541195 (0.156806) | 1.744856 / 1.468490 
(0.276366) | 0.693671 / 4.584777 (-3.891106) | 3.362522 / 3.745712 (-0.383190) | 1.878470 / 5.269862 (-3.391392) | 1.167563 / 4.565676 (-3.398113) | 0.082455 / 0.424275 (-0.341820) | 0.012261 / 0.007607 (0.004654) | 0.525196 / 0.226044 (0.299152) | 5.257553 / 2.268929 (2.988624) | 2.298286 / 55.444624 (-53.146339) | 1.956106 / 6.876477 (-4.920371) | 2.006308 / 2.142072 (-0.135764) | 0.811069 / 4.805227 (-3.994158) | 0.150368 / 6.500664 (-6.350296) | 0.065699 / 0.075469 (-0.009771) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224516 / 1.841788 (-0.617272) | 13.619084 / 8.074308 (5.544776) | 14.096666 / 10.191392 (3.905274) | 0.151068 / 0.680424 (-0.529356) | 0.028819 / 0.534201 (-0.505382) | 0.402071 / 0.579283 (-0.177212) | 0.408647 / 0.434364 (-0.025717) | 0.466605 / 0.540337 (-0.073733) | 0.547094 / 1.386936 (-0.839842) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006935 / 0.011353 (-0.004418) | 0.004590 / 0.011008 (-0.006419) | 0.099398 / 0.038508 (0.060890) | 0.028145 / 0.023109 (0.005036) | 0.426582 / 0.275898 (0.150684) | 0.465712 / 0.323480 (0.142233) | 0.005254 / 0.007986 (-0.002731) | 0.004956 / 0.004328 (0.000627) | 0.075616 / 0.004250 (0.071365) | 0.039871 / 0.037052 (0.002819) | 0.428859 / 0.258489 (0.170370) | 0.470839 / 0.293841 (0.176998) | 0.032150 / 0.128546 (-0.096396) | 0.011778 / 0.075646 (-0.063868) | 0.322358 / 0.419271 (-0.096913) | 0.041974 / 0.043533 (-0.001559) | 0.427459 / 0.255139 (0.172320) | 0.446685 / 0.283200 (0.163485) | 0.092000 / 0.141683 (-0.049683) | 1.509231 / 1.452155 (0.057076) | 1.578950 / 1.492716 (0.086234) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.168047 / 0.018006 (0.150041) | 0.418993 / 0.000490 (0.418503) | 0.002855 / 0.000200 (0.002655) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025652 / 0.037411 (-0.011759) | 0.100141 / 0.014526 (0.085616) | 0.107293 / 0.176557 (-0.069264) | 0.142857 / 0.737135 (-0.594278) | 0.110933 / 0.296338 (-0.185406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477556 / 0.215209 (0.262347) | 4.777951 / 2.077655 (2.700296) | 2.461885 / 1.504120 (0.957765) | 2.252307 / 1.541195 (0.711112) | 2.307983 / 1.468490 (0.839493) | 0.697570 / 4.584777 (-3.887207) | 3.370323 / 3.745712 (-0.375389) | 3.131333 / 5.269862 (-2.138529) | 1.594839 / 4.565676 (-2.970838) | 0.082333 / 0.424275 (-0.341942) | 0.012574 / 0.007607 (0.004967) | 0.583704 / 0.226044 (0.357660) | 5.817675 / 2.268929 (3.548746) | 2.927054 / 55.444624 (-52.517570) | 2.582929 / 6.876477 (-4.293548) | 2.634275 / 2.142072 (0.492202) | 0.806407 / 4.805227 (-3.998821) | 0.151438 / 6.500664 (-6.349226) | 0.067429 / 0.075469 (-0.008040) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267011 / 1.841788 (-0.574776) | 13.989515 / 8.074308 (5.915207) | 14.087968 / 10.191392 (3.896576) | 0.142130 / 0.680424 (-0.538293) | 0.017201 / 0.534201 (-0.517000) | 0.383394 / 0.579283 (-0.195889) | 0.381921 / 0.434364 (-0.052443) | 0.439169 / 0.540337 (-0.101168) | 0.524215 / 1.386936 (-0.862721) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008489 / 0.011353 (-0.002864) | 0.004617 / 0.011008 (-0.006391) | 0.102035 / 0.038508 (0.063527) | 0.029850 / 0.023109 (0.006741) | 0.296789 / 0.275898 (0.020891) | 0.367270 / 0.323480 (0.043790) | 0.006934 / 0.007986 (-0.001052) | 0.004923 / 0.004328 (0.000595) | 0.079150 / 0.004250 (0.074900) | 0.036884 / 0.037052 (-0.000169) | 0.305747 / 0.258489 (0.047258) | 0.348510 / 0.293841 (0.054669) | 0.034074 / 0.128546 (-0.094472) | 0.011650 / 0.075646 (-0.063997) | 0.324226 / 0.419271 (-0.095045) | 0.041763 / 0.043533 (-0.001770) | 0.300887 / 0.255139 (0.045748) | 0.333393 / 0.283200 (0.050193) | 0.093838 / 0.141683 (-0.047844) | 1.499801 / 1.452155 (0.047646) | 1.505988 / 1.492716 (0.013272) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198610 / 0.018006 (0.180604) | 0.407380 / 0.000490 (0.406891) | 0.000367 / 0.000200 (0.000167) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022858 / 0.037411 (-0.014554) | 0.095727 / 0.014526 (0.081202) | 0.104014 / 0.176557 (-0.072543) | 0.138764 / 0.737135 (-0.598371) | 0.105860 / 0.296338 (-0.190478) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416352 / 0.215209 (0.201143) | 4.150007 / 2.077655 (2.072352) | 1.878727 / 1.504120 (0.374607) | 1.678978 / 1.541195 (0.137783) | 1.711990 / 1.468490 
(0.243500) | 0.691722 / 4.584777 (-3.893055) | 3.386466 / 3.745712 (-0.359246) | 1.835730 / 5.269862 (-3.434132) | 1.149975 / 4.565676 (-3.415702) | 0.081914 / 0.424275 (-0.342362) | 0.012238 / 0.007607 (0.004631) | 0.522945 / 0.226044 (0.296900) | 5.251793 / 2.268929 (2.982864) | 2.306907 / 55.444624 (-53.137717) | 1.968400 / 6.876477 (-4.908076) | 1.981154 / 2.142072 (-0.160919) | 0.810126 / 4.805227 (-3.995101) | 0.147876 / 6.500664 (-6.352788) | 0.064042 / 0.075469 (-0.011428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199150 / 1.841788 (-0.642637) | 13.913473 / 8.074308 (5.839165) | 14.079132 / 10.191392 (3.887740) | 0.137387 / 0.680424 (-0.543037) | 0.028456 / 0.534201 (-0.505745) | 0.394162 / 0.579283 (-0.185122) | 0.402051 / 0.434364 (-0.032313) | 0.461944 / 0.540337 (-0.078394) | 0.542648 / 1.386936 (-0.844288) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006393 / 0.011353 (-0.004960) | 0.004599 / 0.011008 (-0.006409) | 0.097389 / 0.038508 (0.058881) | 0.027719 / 0.023109 (0.004610) | 0.341060 / 0.275898 (0.065162) | 0.379604 / 0.323480 (0.056124) | 0.004955 / 0.007986 (-0.003030) | 0.003369 / 0.004328 (-0.000959) | 0.075390 / 0.004250 (0.071139) | 0.038518 / 0.037052 (0.001466) | 0.347085 / 0.258489 (0.088596) | 0.393468 / 0.293841 (0.099627) | 0.031482 / 0.128546 (-0.097064) | 0.011585 / 0.075646 (-0.064061) | 0.317969 / 0.419271 (-0.101302) | 0.041389 / 0.043533 (-0.002144) | 0.343812 / 0.255139 (0.088673) | 0.371047 / 0.283200 (0.087848) | 0.090020 / 0.141683 (-0.051663) | 1.461690 / 1.452155 (0.009536) | 1.552458 / 1.492716 (0.059741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188691 / 0.018006 (0.170684) | 0.415635 / 0.000490 (0.415145) | 0.005285 / 0.000200 (0.005085) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024695 / 0.037411 (-0.012716) | 0.098939 / 0.014526 (0.084413) | 0.108472 / 0.176557 (-0.068085) | 0.152635 / 0.737135 (-0.584501) | 0.109947 / 0.296338 (-0.186391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471975 / 0.215209 (0.256766) | 4.716437 / 2.077655 (2.638782) | 2.420148 / 1.504120 (0.916028) | 2.219864 / 1.541195 (0.678669) | 2.238647 / 1.468490 (0.770157) | 0.697628 / 4.584777 (-3.887149) | 3.530720 / 3.745712 (-0.214993) | 3.327354 / 5.269862 (-1.942508) | 1.665877 / 4.565676 (-2.899800) | 0.082650 / 0.424275 (-0.341625) | 0.012593 / 0.007607 (0.004986) | 0.576109 / 0.226044 (0.350065) | 5.744691 / 2.268929 (3.475762) | 2.863473 / 55.444624 (-52.581152) | 2.529616 / 6.876477 (-4.346861) | 2.562802 / 2.142072 (0.420730) | 0.805631 / 4.805227 (-3.999597) | 0.150788 / 6.500664 (-6.349876) | 0.065743 / 0.075469 (-0.009726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295134 / 1.841788 (-0.546654) | 14.096046 / 8.074308 (6.021738) | 13.901399 / 10.191392 (3.710007) | 0.127481 / 0.680424 (-0.552943) | 0.016666 / 0.534201 (-0.517535) | 0.381819 / 0.579283 (-0.197464) | 0.382629 / 0.434364 (-0.051735) | 0.439354 / 0.540337 (-0.100984) | 0.527662 / 1.386936 (-0.859274) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008509 / 0.011353 (-0.002844) | 0.004523 / 0.011008 (-0.006485) | 0.100616 / 0.038508 (0.062108) | 0.029573 / 0.023109 (0.006464) | 0.306414 / 0.275898 (0.030516) | 0.377034 / 0.323480 (0.053554) | 0.007621 / 0.007986 (-0.000365) | 0.003335 / 0.004328 (-0.000993) | 0.078598 / 0.004250 (0.074348) | 0.036902 / 0.037052 (-0.000150) | 0.318146 / 0.258489 (0.059657) | 0.355626 / 0.293841 (0.061785) | 0.033441 / 0.128546 (-0.095105) | 0.011552 / 0.075646 (-0.064094) | 0.322973 / 0.419271 (-0.096299) | 0.040564 / 0.043533 (-0.002968) | 0.306451 / 0.255139 (0.051312) | 0.337591 / 0.283200 (0.054392) | 0.086822 / 0.141683 (-0.054861) | 1.484601 / 1.452155 (0.032447) | 1.542777 / 1.492716 (0.050061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201711 / 0.018006 (0.183705) | 0.418387 / 0.000490 (0.417898) | 0.002753 / 0.000200 (0.002553) | 0.000263 / 0.000054 (0.000209) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023016 / 0.037411 (-0.014395) | 0.097313 / 0.014526 (0.082787) | 0.103435 / 0.176557 (-0.073122) | 0.142665 / 0.737135 (-0.594470) | 0.107397 / 0.296338 (-0.188942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422739 / 0.215209 (0.207530) | 4.220126 / 2.077655 (2.142471) | 1.865447 / 1.504120 (0.361327) | 1.649647 / 1.541195 (0.108453) | 1.711655 / 1.468490 
(0.243165) | 0.704269 / 4.584777 (-3.880508) | 3.407390 / 3.745712 (-0.338322) | 1.929224 / 5.269862 (-3.340638) | 1.281225 / 4.565676 (-3.284452) | 0.082924 / 0.424275 (-0.341351) | 0.012588 / 0.007607 (0.004981) | 0.531025 / 0.226044 (0.304980) | 5.339441 / 2.268929 (3.070512) | 2.298969 / 55.444624 (-53.145656) | 1.952145 / 6.876477 (-4.924332) | 2.034754 / 2.142072 (-0.107318) | 0.823672 / 4.805227 (-3.981555) | 0.151465 / 6.500664 (-6.349199) | 0.066663 / 0.075469 (-0.008807) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.258981 / 1.841788 (-0.582807) | 13.791640 / 8.074308 (5.717332) | 14.001514 / 10.191392 (3.810122) | 0.149805 / 0.680424 (-0.530619) | 0.028614 / 0.534201 (-0.505587) | 0.400266 / 0.579283 (-0.179017) | 0.405891 / 0.434364 (-0.028473) | 0.471903 / 0.540337 (-0.068435) | 0.563656 / 1.386936 (-0.823280) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006751 / 0.011353 (-0.004601) | 0.004665 / 0.011008 (-0.006343) | 0.098362 / 0.038508 (0.059854) | 0.027451 / 0.023109 (0.004342) | 0.421859 / 0.275898 (0.145961) | 0.458089 / 0.323480 (0.134609) | 0.004885 / 0.007986 (-0.003101) | 0.003459 / 0.004328 (-0.000870) | 0.075871 / 0.004250 (0.071621) | 0.036591 / 0.037052 (-0.000462) | 0.423307 / 0.258489 (0.164818) | 0.467040 / 0.293841 (0.173199) | 0.031837 / 0.128546 (-0.096710) | 0.011604 / 0.075646 (-0.064042) | 0.321132 / 0.419271 (-0.098140) | 0.041806 / 0.043533 (-0.001727) | 0.421653 / 0.255139 (0.166514) | 0.445896 / 0.283200 (0.162696) | 0.087998 / 0.141683 (-0.053685) | 1.475818 / 1.452155 (0.023664) | 1.559487 / 1.492716 (0.066770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203096 / 0.018006 (0.185090) | 0.401381 / 0.000490 (0.400892) | 0.004037 / 0.000200 (0.003837) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023757 / 0.037411 (-0.013654) | 0.099919 / 0.014526 (0.085393) | 0.108384 / 0.176557 (-0.068173) | 0.143780 / 0.737135 (-0.593355) | 0.111528 / 0.296338 (-0.184811) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475896 / 0.215209 (0.260686) | 4.754567 / 2.077655 (2.676912) | 2.444986 / 1.504120 (0.940866) | 2.231055 / 1.541195 (0.689860) | 2.283646 / 1.468490 (0.815156) | 0.701303 / 4.584777 (-3.883474) | 3.381597 / 3.745712 (-0.364115) | 1.878714 / 5.269862 (-3.391148) | 1.171566 / 4.565676 (-3.394111) | 0.083106 / 0.424275 (-0.341169) | 0.012575 / 0.007607 (0.004967) | 0.582570 / 0.226044 (0.356526) | 5.813677 / 2.268929 (3.544748) | 2.908578 / 55.444624 (-52.536046) | 2.548459 / 6.876477 (-4.328017) | 2.581211 / 2.142072 (0.439139) | 0.807925 / 4.805227 (-3.997302) | 0.153516 / 6.500664 (-6.347148) | 0.068763 / 0.075469 (-0.006706) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249595 / 1.841788 (-0.592193) | 14.208573 / 8.074308 (6.134265) | 14.179174 / 10.191392 (3.987781) | 0.156005 / 0.680424 (-0.524419) | 0.017045 / 0.534201 (-0.517156) | 0.377414 / 0.579283 (-0.201869) | 0.395291 / 0.434364 (-0.039073) | 0.444642 / 0.540337 (-0.095695) | 0.531626 / 1.386936 (-0.855311) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008871 / 0.011353 (-0.002482) | 0.004616 / 0.011008 (-0.006392) | 0.100910 / 0.038508 (0.062402) | 0.030381 / 0.023109 (0.007272) | 0.304636 / 0.275898 (0.028737) | 0.384258 / 0.323480 (0.060778) | 0.007019 / 0.007986 (-0.000966) | 0.004262 / 0.004328 (-0.000066) | 0.077082 / 0.004250 (0.072832) | 0.035235 / 0.037052 (-0.001817) | 0.318293 / 0.258489 (0.059804) | 0.356578 / 0.293841 (0.062737) | 0.033568 / 0.128546 (-0.094978) | 0.011583 / 0.075646 (-0.064063) | 0.322442 / 0.419271 (-0.096830) | 0.041941 / 0.043533 (-0.001592) | 0.310469 / 0.255139 (0.055330) | 0.335626 / 0.283200 (0.052427) | 0.088195 / 0.141683 (-0.053487) | 1.466778 / 1.452155 (0.014623) | 1.512459 / 1.492716 (0.019743) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184126 / 0.018006 (0.166120) | 0.413392 / 0.000490 (0.412902) | 0.002191 / 0.000200 (0.001992) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023426 / 0.037411 (-0.013985) | 0.096240 / 0.014526 (0.081715) | 0.105908 / 0.176557 (-0.070648) | 0.146331 / 0.737135 (-0.590804) | 0.107441 / 0.296338 (-0.188898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420018 / 0.215209 (0.204809) | 4.198129 / 2.077655 (2.120474) | 1.998726 / 1.504120 (0.494606) | 1.870410 / 1.541195 (0.329215) | 1.925160 / 1.468490 
(0.456670) | 0.688790 / 4.584777 (-3.895987) | 3.430629 / 3.745712 (-0.315083) | 2.875616 / 5.269862 (-2.394246) | 1.566269 / 4.565676 (-2.999408) | 0.082431 / 0.424275 (-0.341844) | 0.012409 / 0.007607 (0.004802) | 0.536178 / 0.226044 (0.310134) | 5.342918 / 2.268929 (3.073989) | 2.410814 / 55.444624 (-53.033811) | 2.056518 / 6.876477 (-4.819958) | 2.240148 / 2.142072 (0.098075) | 0.804848 / 4.805227 (-4.000379) | 0.147325 / 6.500664 (-6.353340) | 0.064217 / 0.075469 (-0.011252) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285725 / 1.841788 (-0.556063) | 13.909739 / 8.074308 (5.835431) | 14.025774 / 10.191392 (3.834382) | 0.142413 / 0.680424 (-0.538011) | 0.028390 / 0.534201 (-0.505811) | 0.402345 / 0.579283 (-0.176939) | 0.404341 / 0.434364 (-0.030023) | 0.463055 / 0.540337 (-0.077282) | 0.556811 / 1.386936 (-0.830125) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006557 / 0.011353 (-0.004795) | 0.004668 / 0.011008 (-0.006340) | 0.098839 / 0.038508 (0.060331) | 0.027618 / 0.023109 (0.004508) | 0.409338 / 0.275898 (0.133440) | 0.444048 / 0.323480 (0.120568) | 0.004881 / 0.007986 (-0.003105) | 0.003434 / 0.004328 (-0.000895) | 0.076497 / 0.004250 (0.072247) | 0.038932 / 0.037052 (0.001880) | 0.411419 / 0.258489 (0.152930) | 0.451167 / 0.293841 (0.157326) | 0.031649 / 0.128546 (-0.096897) | 0.011691 / 0.075646 (-0.063955) | 0.321586 / 0.419271 (-0.097685) | 0.041984 / 0.043533 (-0.001549) | 0.407717 / 0.255139 (0.152578) | 0.434687 / 0.283200 (0.151487) | 0.086419 / 0.141683 (-0.055264) | 1.491755 / 1.452155 (0.039601) | 1.569081 / 1.492716 (0.076364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231746 / 0.018006 (0.213739) | 0.412271 / 0.000490 (0.411781) | 0.000403 / 0.000200 (0.000203) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024264 / 0.037411 (-0.013147) | 0.100478 / 0.014526 (0.085952) | 0.107065 / 0.176557 (-0.069491) | 0.140724 / 0.737135 (-0.596412) | 0.110631 / 0.296338 (-0.185707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472476 / 0.215209 (0.257267) | 4.738919 / 2.077655 (2.661265) | 2.438049 / 1.504120 (0.933929) | 2.237855 / 1.541195 (0.696660) | 2.282885 / 1.468490 (0.814395) | 0.690420 / 4.584777 (-3.894357) | 3.426487 / 3.745712 (-0.319225) | 1.842443 / 5.269862 (-3.427418) | 1.154466 / 4.565676 (-3.411210) | 0.082166 / 0.424275 (-0.342109) | 0.012309 / 0.007607 (0.004701) | 0.574730 / 0.226044 (0.348686) | 5.737566 / 2.268929 (3.468638) | 2.882405 / 55.444624 (-52.562220) | 2.540276 / 6.876477 (-4.336201) | 2.552356 / 2.142072 (0.410283) | 0.796413 / 4.805227 (-4.008815) | 0.152705 / 6.500664 (-6.347959) | 0.068273 / 0.075469 (-0.007196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244423 / 1.841788 (-0.597365) | 13.827750 / 8.074308 (5.753442) | 14.074083 / 10.191392 (3.882691) | 0.140291 / 0.680424 (-0.540133) | 0.017337 / 0.534201 (-0.516864) | 0.389314 / 0.579283 (-0.189969) | 0.390914 / 0.434364 (-0.043450) | 0.450333 / 0.540337 (-0.090004) | 0.543860 / 1.386936 (-0.843076) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009490 / 0.011353 (-0.001863) | 0.005211 / 0.011008 (-0.005798) | 0.100884 / 0.038508 (0.062376) | 0.035834 / 0.023109 (0.012725) | 0.293623 / 0.275898 (0.017724) | 0.378118 / 0.323480 (0.054638) | 0.008106 / 0.007986 (0.000120) | 0.005339 / 0.004328 (0.001010) | 0.076311 / 0.004250 (0.072061) | 0.045954 / 0.037052 (0.008902) | 0.308163 / 0.258489 (0.049674) | 0.353470 / 0.293841 (0.059629) | 0.038539 / 0.128546 (-0.090008) | 0.012174 / 0.075646 (-0.063472) | 0.334875 / 0.419271 (-0.084396) | 0.048602 / 0.043533 (0.005069) | 0.295803 / 0.255139 (0.040664) | 0.318894 / 0.283200 (0.035695) | 0.105487 / 0.141683 (-0.036195) | 1.433628 / 1.452155 (-0.018526) | 1.466843 / 1.492716 (-0.025873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203426 / 0.018006 (0.185419) | 0.456877 / 0.000490 (0.456387) | 0.001452 / 0.000200 (0.001252) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028308 / 0.037411 (-0.009103) | 0.108965 / 0.014526 (0.094439) | 0.119552 / 0.176557 (-0.057005) | 0.156371 / 0.737135 (-0.580765) | 0.124141 / 0.296338 (-0.172197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400183 / 0.215209 (0.184973) | 3.990983 / 2.077655 (1.913329) | 1.806729 / 1.504120 (0.302609) | 1.611944 / 1.541195 (0.070750) | 1.740019 / 1.468490 
(0.271529) | 0.699600 / 4.584777 (-3.885177) | 3.868711 / 3.745712 (0.122999) | 3.249758 / 5.269862 (-2.020103) | 1.832213 / 4.565676 (-2.733463) | 0.085282 / 0.424275 (-0.338993) | 0.012726 / 0.007607 (0.005119) | 0.509385 / 0.226044 (0.283341) | 5.066913 / 2.268929 (2.797984) | 2.325710 / 55.444624 (-53.118914) | 1.962238 / 6.876477 (-4.914239) | 2.017576 / 2.142072 (-0.124496) | 0.839444 / 4.805227 (-3.965783) | 0.166936 / 6.500664 (-6.333728) | 0.064546 / 0.075469 (-0.010923) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196396 / 1.841788 (-0.645392) | 15.077063 / 8.074308 (7.002755) | 14.268103 / 10.191392 (4.076711) | 0.163782 / 0.680424 (-0.516642) | 0.028794 / 0.534201 (-0.505407) | 0.440564 / 0.579283 (-0.138719) | 0.439826 / 0.434364 (0.005463) | 0.514786 / 0.540337 (-0.025551) | 0.603353 / 1.386936 (-0.783583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007874 / 0.011353 (-0.003479) | 0.005347 / 0.011008 (-0.005661) | 0.099461 / 0.038508 (0.060953) | 0.034010 / 0.023109 (0.010901) | 0.384650 / 0.275898 (0.108752) | 0.423827 / 0.323480 (0.100347) | 0.006201 / 0.007986 (-0.001784) | 0.004212 / 0.004328 (-0.000117) | 0.074354 / 0.004250 (0.070104) | 0.051675 / 0.037052 (0.014623) | 0.392488 / 0.258489 (0.133999) | 0.425828 / 0.293841 (0.131987) | 0.037444 / 0.128546 (-0.091103) | 0.012388 / 0.075646 (-0.063258) | 0.334482 / 0.419271 (-0.084789) | 0.050715 / 0.043533 (0.007182) | 0.378323 / 0.255139 (0.123184) | 0.395450 / 0.283200 (0.112250) | 0.108403 / 0.141683 (-0.033280) | 1.426803 / 1.452155 (-0.025352) | 1.532417 / 1.492716 (0.039701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219989 / 0.018006 (0.201982) | 0.454101 / 0.000490 (0.453611) | 0.000407 / 0.000200 (0.000207) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030590 / 0.037411 (-0.006822) | 0.113483 / 0.014526 (0.098957) | 0.122603 / 0.176557 (-0.053954) | 0.161031 / 0.737135 (-0.576104) | 0.128039 / 0.296338 (-0.168300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430458 / 0.215209 (0.215249) | 4.286594 / 2.077655 (2.208940) | 2.056666 / 1.504120 (0.552546) | 1.861142 / 1.541195 (0.319948) | 1.937185 / 1.468490 (0.468695) | 0.701881 / 4.584777 (-3.882896) | 3.970144 / 3.745712 (0.224432) | 2.107118 / 5.269862 (-3.162744) | 1.351561 / 4.565676 (-3.214115) | 0.085470 / 0.424275 (-0.338805) | 0.012366 / 0.007607 (0.004759) | 0.525212 / 0.226044 (0.299168) | 5.301553 / 2.268929 (3.032625) | 2.593862 / 55.444624 (-52.850763) | 2.287315 / 6.876477 (-4.589161) | 2.368249 / 2.142072 (0.226176) | 0.855656 / 4.805227 (-3.949571) | 0.167846 / 6.500664 (-6.332818) | 0.064521 / 0.075469 (-0.010948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237008 / 1.841788 (-0.604779) | 15.784303 / 8.074308 (7.709995) | 14.613081 / 10.191392 (4.421689) | 0.161012 / 0.680424 (-0.519412) | 0.017928 / 0.534201 (-0.516273) | 0.423905 / 0.579283 (-0.155378) | 0.428316 / 0.434364 (-0.006048) | 0.500226 / 0.540337 (-0.040112) | 0.606725 / 1.386936 (-0.780211) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008874 / 0.011353 (-0.002479) | 0.004581 / 0.011008 (-0.006428) | 0.100180 / 0.038508 (0.061672) | 0.029990 / 0.023109 (0.006880) | 0.301616 / 0.275898 (0.025718) | 0.343662 / 0.323480 (0.020183) | 0.007111 / 0.007986 (-0.000875) | 0.003428 / 0.004328 (-0.000900) | 0.078031 / 0.004250 (0.073780) | 0.037332 / 0.037052 (0.000279) | 0.301977 / 0.258489 (0.043488) | 0.345581 / 0.293841 (0.051740) | 0.034305 / 0.128546 (-0.094241) | 0.011660 / 0.075646 (-0.063986) | 0.322289 / 0.419271 (-0.096982) | 0.041488 / 0.043533 (-0.002045) | 0.301612 / 0.255139 (0.046473) | 0.328174 / 0.283200 (0.044974) | 0.085561 / 0.141683 (-0.056122) | 1.482114 / 1.452155 (0.029959) | 1.556194 / 1.492716 (0.063478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186989 / 0.018006 (0.168983) | 0.421499 / 0.000490 (0.421009) | 0.001193 / 0.000200 (0.000993) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023551 / 0.037411 (-0.013861) | 0.099868 / 0.014526 (0.085343) | 0.105233 / 0.176557 (-0.071324) | 0.141628 / 0.737135 (-0.595507) | 0.109004 / 0.296338 (-0.187335) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415189 / 0.215209 (0.199979) | 4.145716 / 2.077655 (2.068061) | 1.837917 / 1.504120 (0.333797) | 1.635043 / 1.541195 (0.093848) | 1.683299 / 1.468490 
(0.214809) | 0.688538 / 4.584777 (-3.896239) | 3.412628 / 3.745712 (-0.333084) | 1.877456 / 5.269862 (-3.392405) | 1.154129 / 4.565676 (-3.411547) | 0.081850 / 0.424275 (-0.342425) | 0.012309 / 0.007607 (0.004702) | 0.522830 / 0.226044 (0.296785) | 5.238685 / 2.268929 (2.969756) | 2.277840 / 55.444624 (-53.166784) | 1.941787 / 6.876477 (-4.934690) | 1.999688 / 2.142072 (-0.142385) | 0.807590 / 4.805227 (-3.997637) | 0.148157 / 6.500664 (-6.352507) | 0.064898 / 0.075469 (-0.010571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253859 / 1.841788 (-0.587929) | 13.676097 / 8.074308 (5.601789) | 14.237837 / 10.191392 (4.046444) | 0.137178 / 0.680424 (-0.543246) | 0.028971 / 0.534201 (-0.505230) | 0.400380 / 0.579283 (-0.178903) | 0.409990 / 0.434364 (-0.024374) | 0.462552 / 0.540337 (-0.077786) | 0.552153 / 1.386936 (-0.834783) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006831 / 0.011353 (-0.004522) | 0.004627 / 0.011008 (-0.006381) | 0.099883 / 0.038508 (0.061375) | 0.028072 / 0.023109 (0.004962) | 0.343556 / 0.275898 (0.067658) | 0.386792 / 0.323480 (0.063312) | 0.005080 / 0.007986 (-0.002906) | 0.003508 / 0.004328 (-0.000820) | 0.077803 / 0.004250 (0.073552) | 0.040038 / 0.037052 (0.002985) | 0.345089 / 0.258489 (0.086600) | 0.396078 / 0.293841 (0.102238) | 0.032241 / 0.128546 (-0.096305) | 0.011711 / 0.075646 (-0.063935) | 0.320531 / 0.419271 (-0.098740) | 0.043658 / 0.043533 (0.000125) | 0.344696 / 0.255139 (0.089557) | 0.389847 / 0.283200 (0.106648) | 0.092328 / 0.141683 (-0.049355) | 1.477290 / 1.452155 (0.025136) | 1.548698 / 1.492716 (0.055982) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236073 / 0.018006 (0.218067) | 0.422113 / 0.000490 (0.421624) | 0.000431 / 0.000200 (0.000231) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024738 / 0.037411 (-0.012673) | 0.100546 / 0.014526 (0.086020) | 0.107550 / 0.176557 (-0.069006) | 0.146056 / 0.737135 (-0.591079) | 0.112665 / 0.296338 (-0.183674) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.490259 / 0.215209 (0.275050) | 4.907994 / 2.077655 (2.830339) | 2.547175 / 1.504120 (1.043055) | 2.344419 / 1.541195 (0.803224) | 2.403985 / 1.468490 (0.935495) | 0.696011 / 4.584777 (-3.888766) | 3.442426 / 3.745712 (-0.303286) | 1.878702 / 5.269862 (-3.391159) | 1.158280 / 4.565676 (-3.407396) | 0.082300 / 0.424275 (-0.341975) | 0.012513 / 0.007607 (0.004906) | 0.602696 / 0.226044 (0.376651) | 6.014592 / 2.268929 (3.745663) | 3.014466 / 55.444624 (-52.430159) | 2.669376 / 6.876477 (-4.207101) | 2.724485 / 2.142072 (0.582412) | 0.799795 / 4.805227 (-4.005432) | 0.151220 / 6.500664 (-6.349444) | 0.067486 / 0.075469 (-0.007983) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281265 / 1.841788 (-0.560523) | 14.362284 / 8.074308 (6.287976) | 14.313690 / 10.191392 (4.122298) | 0.142870 / 0.680424 (-0.537554) | 0.017206 / 0.534201 (-0.516995) | 0.380084 / 0.579283 (-0.199199) | 0.388161 / 0.434364 (-0.046203) | 0.442617 / 0.540337 (-0.097721) | 0.528487 / 1.386936 (-0.858449) |\n\n</details>\n</details>\n\n\n",
"@lhoestq @amyeroberts @gante I did a substantial rewrite and all tests are passing now (Windows seems to time out or something and I can't figure out why - not sure if that's related to this PR!). I also confirmed tests are passing locally with Py==3.10. \r\n\r\nAside from incorporating everyone's comments, I also made a context manager to create and handle shared memory - this ensures that shared memory is cleaned up even if execution is interrupted. Also, shared memory names include a UUID string now to avoid collisions. Finally, string arrays are now split up into fixed-width character arrays in the workers so that they can be passed through shared memory, and the parent process reconstructs them into string arrays.",
"Update: `test_arrow_dataset.py` ran fine in this branch on my Windows machine (Py 3.10), so I have no idea what's up with those tests",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008852 / 0.011353 (-0.002500) | 0.004545 / 0.011008 (-0.006464) | 0.099814 / 0.038508 (0.061306) | 0.030314 / 0.023109 (0.007205) | 0.310426 / 0.275898 (0.034528) | 0.366893 / 0.323480 (0.043413) | 0.007183 / 0.007986 (-0.000802) | 0.003476 / 0.004328 (-0.000853) | 0.077566 / 0.004250 (0.073315) | 0.038269 / 0.037052 (0.001217) | 0.319133 / 0.258489 (0.060644) | 0.352399 / 0.293841 (0.058558) | 0.033847 / 0.128546 (-0.094700) | 0.011568 / 0.075646 (-0.064078) | 0.321355 / 0.419271 (-0.097917) | 0.040719 / 0.043533 (-0.002814) | 0.304812 / 0.255139 (0.049673) | 0.329512 / 0.283200 (0.046312) | 0.088045 / 0.141683 (-0.053638) | 1.514182 / 1.452155 (0.062027) | 1.529459 / 1.492716 (0.036742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216749 / 0.018006 (0.198743) | 0.409909 / 0.000490 (0.409419) | 0.002790 / 0.000200 (0.002590) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023390 / 0.037411 (-0.014021) | 0.095955 / 0.014526 (0.081430) | 0.104749 / 0.176557 (-0.071807) | 0.143414 / 0.737135 (-0.593721) | 0.109011 / 0.296338 (-0.187328) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420410 / 0.215209 (0.205201) | 4.185745 / 2.077655 (2.108090) | 1.910207 / 1.504120 (0.406087) | 1.679330 / 1.541195 (0.138135) | 1.727134 / 1.468490 
(0.258644) | 0.692379 / 4.584777 (-3.892398) | 3.358731 / 3.745712 (-0.386982) | 2.914657 / 5.269862 (-2.355205) | 1.506083 / 4.565676 (-3.059594) | 0.081922 / 0.424275 (-0.342353) | 0.012691 / 0.007607 (0.005084) | 0.530942 / 0.226044 (0.304897) | 5.357642 / 2.268929 (3.088714) | 2.387347 / 55.444624 (-53.057277) | 2.030001 / 6.876477 (-4.846476) | 2.026405 / 2.142072 (-0.115667) | 0.809406 / 4.805227 (-3.995821) | 0.149003 / 6.500664 (-6.351661) | 0.066910 / 0.075469 (-0.008559) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278160 / 1.841788 (-0.563627) | 13.632742 / 8.074308 (5.558434) | 13.995537 / 10.191392 (3.804145) | 0.136507 / 0.680424 (-0.543917) | 0.028817 / 0.534201 (-0.505384) | 0.394842 / 0.579283 (-0.184441) | 0.399526 / 0.434364 (-0.034838) | 0.459174 / 0.540337 (-0.081163) | 0.536877 / 1.386936 (-0.850059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006814 / 0.011353 (-0.004539) | 0.004456 / 0.011008 (-0.006552) | 0.098386 / 0.038508 (0.059878) | 0.028124 / 0.023109 (0.005015) | 0.409004 / 0.275898 (0.133106) | 0.446746 / 0.323480 (0.123266) | 0.005108 / 0.007986 (-0.002877) | 0.004807 / 0.004328 (0.000479) | 0.075751 / 0.004250 (0.071500) | 0.039297 / 0.037052 (0.002244) | 0.413198 / 0.258489 (0.154709) | 0.452124 / 0.293841 (0.158283) | 0.032534 / 0.128546 (-0.096012) | 0.011689 / 0.075646 (-0.063957) | 0.325465 / 0.419271 (-0.093806) | 0.041347 / 0.043533 (-0.002185) | 0.411489 / 0.255139 (0.156350) | 0.447120 / 0.283200 (0.163920) | 0.093058 / 0.141683 (-0.048625) | 1.489903 / 1.452155 (0.037748) | 1.580771 / 1.492716 (0.088055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192619 / 0.018006 (0.174613) | 0.399201 / 0.000490 (0.398711) | 0.002894 / 0.000200 (0.002694) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025120 / 0.037411 (-0.012292) | 0.100126 / 0.014526 (0.085600) | 0.108669 / 0.176557 (-0.067887) | 0.148687 / 0.737135 (-0.588448) | 0.112286 / 0.296338 (-0.184052) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438866 / 0.215209 (0.223657) | 4.382418 / 2.077655 (2.304764) | 2.106450 / 1.504120 (0.602330) | 1.885105 / 1.541195 (0.343910) | 1.922948 / 1.468490 (0.454458) | 0.693145 / 4.584777 (-3.891632) | 3.378206 / 3.745712 (-0.367506) | 1.867295 / 5.269862 (-3.402566) | 1.164999 / 4.565676 (-3.400678) | 0.081918 / 0.424275 (-0.342357) | 0.012225 / 0.007607 (0.004618) | 0.547114 / 0.226044 (0.321069) | 5.454208 / 2.268929 (3.185279) | 2.532112 / 55.444624 (-52.912512) | 2.192573 / 6.876477 (-4.683904) | 2.225364 / 2.142072 (0.083291) | 0.797165 / 4.805227 (-4.008062) | 0.151185 / 6.500664 (-6.349480) | 0.067512 / 0.075469 (-0.007957) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303905 / 1.841788 (-0.537883) | 14.107678 / 8.074308 (6.033370) | 14.147630 / 10.191392 (3.956238) | 0.156597 / 0.680424 (-0.523827) | 0.017037 / 0.534201 (-0.517164) | 0.383202 / 0.579283 (-0.196081) | 0.385340 / 0.434364 (-0.049024) | 0.443338 / 0.540337 (-0.097000) | 0.542345 / 1.386936 (-0.844591) |\n\n</details>\n</details>\n\n\n",
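The tables above are posted by an automated benchmark run, comparing new timings against the previous reference ("new / old (diff)"). As a rough illustration of what a metric such as `map no-op batched` measures, the following hedged sketch times an identity, batched `map` over a small in-memory dataset with the `datasets` library; the toy dataset size and the use of `time.perf_counter` are assumptions and do not reproduce the CI benchmark harness.

```python
# Hedged sketch: timing a no-op batched map, loosely analogous to the
# "map no-op batched" metric in the benchmark tables above. The toy dataset
# size and the timing approach are assumptions, not the CI setup.
import time

from datasets import Dataset

# Small in-memory dataset used only for illustration.
ds = Dataset.from_dict({"text": ["example"] * 50_000})

start = time.perf_counter()
ds = ds.map(lambda batch: batch, batched=True)  # identity function, batched
elapsed = time.perf_counter() - start

print(f"map no-op batched over {len(ds)} rows took {elapsed:.3f}s")
```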
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009982 / 0.011353 (-0.001371) | 0.005327 / 0.011008 (-0.005681) | 0.099092 / 0.038508 (0.060584) | 0.035824 / 0.023109 (0.012715) | 0.303258 / 0.275898 (0.027360) | 0.335379 / 0.323480 (0.011899) | 0.008192 / 0.007986 (0.000207) | 0.004242 / 0.004328 (-0.000087) | 0.076277 / 0.004250 (0.072026) | 0.043851 / 0.037052 (0.006799) | 0.307750 / 0.258489 (0.049261) | 0.348459 / 0.293841 (0.054618) | 0.038943 / 0.128546 (-0.089604) | 0.012128 / 0.075646 (-0.063519) | 0.334143 / 0.419271 (-0.085128) | 0.047865 / 0.043533 (0.004332) | 0.300909 / 0.255139 (0.045770) | 0.320879 / 0.283200 (0.037680) | 0.103812 / 0.141683 (-0.037871) | 1.468646 / 1.452155 (0.016491) | 1.557660 / 1.492716 (0.064944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244108 / 0.018006 (0.226102) | 0.554895 / 0.000490 (0.554405) | 0.005311 / 0.000200 (0.005111) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028771 / 0.037411 (-0.008640) | 0.108133 / 0.014526 (0.093608) | 0.120098 / 0.176557 (-0.056458) | 0.159815 / 0.737135 (-0.577320) | 0.125437 / 0.296338 (-0.170901) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397675 / 0.215209 (0.182466) | 3.975839 / 2.077655 (1.898184) | 1.797803 / 1.504120 (0.293683) | 1.612517 / 1.541195 (0.071322) | 1.659086 / 1.468490 
(0.190596) | 0.679822 / 4.584777 (-3.904955) | 3.688321 / 3.745712 (-0.057391) | 2.155285 / 5.269862 (-3.114576) | 1.466453 / 4.565676 (-3.099223) | 0.084102 / 0.424275 (-0.340173) | 0.012074 / 0.007607 (0.004467) | 0.503744 / 0.226044 (0.277699) | 5.075599 / 2.268929 (2.806670) | 2.312149 / 55.444624 (-53.132476) | 1.975028 / 6.876477 (-4.901449) | 2.069554 / 2.142072 (-0.072519) | 0.828329 / 4.805227 (-3.976898) | 0.162816 / 6.500664 (-6.337849) | 0.063813 / 0.075469 (-0.011656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.173327 / 1.841788 (-0.668461) | 15.281584 / 8.074308 (7.207276) | 14.450851 / 10.191392 (4.259459) | 0.165621 / 0.680424 (-0.514802) | 0.028779 / 0.534201 (-0.505422) | 0.438483 / 0.579283 (-0.140800) | 0.438477 / 0.434364 (0.004113) | 0.517703 / 0.540337 (-0.022634) | 0.615119 / 1.386936 (-0.771817) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007013 / 0.011353 (-0.004340) | 0.005272 / 0.011008 (-0.005736) | 0.097203 / 0.038508 (0.058695) | 0.033103 / 0.023109 (0.009994) | 0.380203 / 0.275898 (0.104305) | 0.414868 / 0.323480 (0.091388) | 0.006326 / 0.007986 (-0.001659) | 0.005433 / 0.004328 (0.001104) | 0.074299 / 0.004250 (0.070049) | 0.049418 / 0.037052 (0.012366) | 0.388771 / 0.258489 (0.130282) | 0.435169 / 0.293841 (0.141328) | 0.036170 / 0.128546 (-0.092377) | 0.012452 / 0.075646 (-0.063195) | 0.331215 / 0.419271 (-0.088056) | 0.048577 / 0.043533 (0.005044) | 0.381491 / 0.255139 (0.126352) | 0.396731 / 0.283200 (0.113531) | 0.106435 / 0.141683 (-0.035248) | 1.446437 / 1.452155 (-0.005718) | 1.542337 / 1.492716 (0.049621) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216714 / 0.018006 (0.198707) | 0.562460 / 0.000490 (0.561970) | 0.003636 / 0.000200 (0.003436) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028726 / 0.037411 (-0.008686) | 0.111993 / 0.014526 (0.097467) | 0.125325 / 0.176557 (-0.051232) | 0.157779 / 0.737135 (-0.579356) | 0.130633 / 0.296338 (-0.165705) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440520 / 0.215209 (0.225311) | 4.396283 / 2.077655 (2.318628) | 2.204714 / 1.504120 (0.700594) | 2.011667 / 1.541195 (0.470473) | 2.050518 / 1.468490 (0.582028) | 0.695204 / 4.584777 (-3.889573) | 3.779699 / 3.745712 (0.033987) | 2.096064 / 5.269862 (-3.173798) | 1.325446 / 4.565676 (-3.240230) | 0.085315 / 0.424275 (-0.338960) | 0.012178 / 0.007607 (0.004570) | 0.550478 / 0.226044 (0.324434) | 5.471872 / 2.268929 (3.202943) | 2.687147 / 55.444624 (-52.757478) | 2.348465 / 6.876477 (-4.528011) | 2.409700 / 2.142072 (0.267628) | 0.839468 / 4.805227 (-3.965760) | 0.167030 / 6.500664 (-6.333635) | 0.063243 / 0.075469 (-0.012226) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257347 / 1.841788 (-0.584441) | 15.157821 / 8.074308 (7.083512) | 14.646381 / 10.191392 (4.454989) | 0.185550 / 0.680424 (-0.494874) | 0.018441 / 0.534201 (-0.515760) | 0.423330 / 0.579283 (-0.155954) | 0.426204 / 0.434364 (-0.008160) | 0.498985 / 0.540337 (-0.041352) | 0.608432 / 1.386936 (-0.778504) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010856 / 0.011353 (-0.000497) | 0.005897 / 0.011008 (-0.005111) | 0.117826 / 0.038508 (0.079317) | 0.041899 / 0.023109 (0.018790) | 0.353804 / 0.275898 (0.077906) | 0.431021 / 0.323480 (0.107541) | 0.009288 / 0.007986 (0.001303) | 0.004556 / 0.004328 (0.000227) | 0.089344 / 0.004250 (0.085094) | 0.052224 / 0.037052 (0.015172) | 0.373242 / 0.258489 (0.114753) | 0.420667 / 0.293841 (0.126826) | 0.044191 / 0.128546 (-0.084355) | 0.014083 / 0.075646 (-0.061564) | 0.400373 / 0.419271 (-0.018898) | 0.056119 / 0.043533 (0.012586) | 0.363302 / 0.255139 (0.108163) | 0.382073 / 0.283200 (0.098873) | 0.118646 / 0.141683 (-0.023037) | 1.696576 / 1.452155 (0.244422) | 1.756518 / 1.492716 (0.263802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216388 / 0.018006 (0.198382) | 0.485732 / 0.000490 (0.485242) | 0.004012 / 0.000200 (0.003812) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032095 / 0.037411 (-0.005316) | 0.128954 / 0.014526 (0.114429) | 0.137564 / 0.176557 (-0.038993) | 0.184315 / 0.737135 (-0.552820) | 0.144707 / 0.296338 (-0.151631) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472792 / 0.215209 (0.257583) | 4.723044 / 2.077655 (2.645390) | 2.115075 / 1.504120 (0.610955) | 1.898993 / 1.541195 (0.357798) | 1.972894 / 1.468490 
(0.504404) | 0.807210 / 4.584777 (-3.777567) | 4.493139 / 3.745712 (0.747427) | 2.501053 / 5.269862 (-2.768808) | 1.686121 / 4.565676 (-2.879556) | 0.099545 / 0.424275 (-0.324730) | 0.014360 / 0.007607 (0.006753) | 0.596235 / 0.226044 (0.370191) | 5.944285 / 2.268929 (3.675357) | 2.654944 / 55.444624 (-52.789681) | 2.281451 / 6.876477 (-4.595026) | 2.448407 / 2.142072 (0.306334) | 1.000512 / 4.805227 (-3.804716) | 0.196413 / 6.500664 (-6.304251) | 0.075810 / 0.075469 (0.000341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.435707 / 1.841788 (-0.406081) | 17.931070 / 8.074308 (9.856762) | 16.635522 / 10.191392 (6.444130) | 0.189119 / 0.680424 (-0.491304) | 0.034392 / 0.534201 (-0.499809) | 0.519041 / 0.579283 (-0.060242) | 0.516159 / 0.434364 (0.081795) | 0.601180 / 0.540337 (0.060843) | 0.713180 / 1.386936 (-0.673756) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008741 / 0.011353 (-0.002612) | 0.006102 / 0.011008 (-0.004906) | 0.114787 / 0.038508 (0.076279) | 0.039610 / 0.023109 (0.016501) | 0.451730 / 0.275898 (0.175832) | 0.488820 / 0.323480 (0.165340) | 0.006979 / 0.007986 (-0.001006) | 0.006458 / 0.004328 (0.002130) | 0.086505 / 0.004250 (0.082254) | 0.057684 / 0.037052 (0.020632) | 0.451354 / 0.258489 (0.192865) | 0.523143 / 0.293841 (0.229302) | 0.043224 / 0.128546 (-0.085323) | 0.014671 / 0.075646 (-0.060975) | 0.398030 / 0.419271 (-0.021241) | 0.063650 / 0.043533 (0.020117) | 0.448324 / 0.255139 (0.193185) | 0.476560 / 0.283200 (0.193361) | 0.125772 / 0.141683 (-0.015911) | 1.801051 / 1.452155 (0.348896) | 1.872736 / 1.492716 (0.380020) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256146 / 0.018006 (0.238139) | 0.486915 / 0.000490 (0.486425) | 0.000513 / 0.000200 (0.000313) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035242 / 0.037411 (-0.002170) | 0.134322 / 0.014526 (0.119797) | 0.144786 / 0.176557 (-0.031770) | 0.188786 / 0.737135 (-0.548349) | 0.151737 / 0.296338 (-0.144602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506047 / 0.215209 (0.290838) | 5.028253 / 2.077655 (2.950598) | 2.393070 / 1.504120 (0.888950) | 2.157847 / 1.541195 (0.616652) | 2.229412 / 1.468490 (0.760922) | 0.828973 / 4.584777 (-3.755804) | 4.741470 / 3.745712 (0.995758) | 4.048118 / 5.269862 (-1.221744) | 2.573818 / 4.565676 (-1.991859) | 0.101019 / 0.424275 (-0.323256) | 0.014640 / 0.007607 (0.007033) | 0.632591 / 0.226044 (0.406546) | 6.289153 / 2.268929 (4.020224) | 2.977261 / 55.444624 (-52.467363) | 2.554396 / 6.876477 (-4.322081) | 2.619446 / 2.142072 (0.477374) | 0.988376 / 4.805227 (-3.816851) | 0.196895 / 6.500664 (-6.303769) | 0.076355 / 0.075469 (0.000886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.493570 / 1.841788 (-0.348218) | 18.422758 / 8.074308 (10.348449) | 17.007352 / 10.191392 (6.815960) | 0.191903 / 0.680424 (-0.488521) | 0.020974 / 0.534201 (-0.513227) | 0.500573 / 0.579283 (-0.078710) | 0.489381 / 0.434364 (0.055017) | 0.580765 / 0.540337 (0.040428) | 0.698907 / 1.386936 (-0.688029) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008979 / 0.011353 (-0.002374) | 0.004497 / 0.011008 (-0.006511) | 0.102227 / 0.038508 (0.063719) | 0.031302 / 0.023109 (0.008193) | 0.298488 / 0.275898 (0.022590) | 0.372589 / 0.323480 (0.049109) | 0.007261 / 0.007986 (-0.000725) | 0.003542 / 0.004328 (-0.000786) | 0.078503 / 0.004250 (0.074253) | 0.039474 / 0.037052 (0.002422) | 0.310991 / 0.258489 (0.052502) | 0.353245 / 0.293841 (0.059404) | 0.033798 / 0.128546 (-0.094749) | 0.011634 / 0.075646 (-0.064012) | 0.321141 / 0.419271 (-0.098131) | 0.041264 / 0.043533 (-0.002268) | 0.300900 / 0.255139 (0.045761) | 0.326255 / 0.283200 (0.043055) | 0.092477 / 0.141683 (-0.049205) | 1.478921 / 1.452155 (0.026766) | 1.514915 / 1.492716 (0.022198) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184415 / 0.018006 (0.166408) | 0.428986 / 0.000490 (0.428497) | 0.002590 / 0.000200 (0.002390) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023730 / 0.037411 (-0.013681) | 0.099846 / 0.014526 (0.085320) | 0.107075 / 0.176557 (-0.069482) | 0.147475 / 0.737135 (-0.589661) | 0.111802 / 0.296338 (-0.184537) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413704 / 0.215209 (0.198494) | 4.144498 / 2.077655 (2.066843) | 1.855900 / 1.504120 (0.351780) | 1.647958 / 1.541195 (0.106763) | 1.712437 / 1.468490 
(0.243947) | 0.688382 / 4.584777 (-3.896395) | 3.432136 / 3.745712 (-0.313576) | 2.837211 / 5.269862 (-2.432651) | 1.519004 / 4.565676 (-3.046672) | 0.082429 / 0.424275 (-0.341846) | 0.012610 / 0.007607 (0.005003) | 0.525078 / 0.226044 (0.299034) | 5.272932 / 2.268929 (3.004003) | 2.340482 / 55.444624 (-53.104143) | 2.007372 / 6.876477 (-4.869104) | 2.060567 / 2.142072 (-0.081506) | 0.806476 / 4.805227 (-3.998752) | 0.149421 / 6.500664 (-6.351243) | 0.066252 / 0.075469 (-0.009218) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235078 / 1.841788 (-0.606710) | 13.870758 / 8.074308 (5.796450) | 14.104582 / 10.191392 (3.913190) | 0.159375 / 0.680424 (-0.521049) | 0.029233 / 0.534201 (-0.504968) | 0.392184 / 0.579283 (-0.187099) | 0.407909 / 0.434364 (-0.026455) | 0.458757 / 0.540337 (-0.081581) | 0.547681 / 1.386936 (-0.839255) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007194 / 0.011353 (-0.004159) | 0.004578 / 0.011008 (-0.006431) | 0.098936 / 0.038508 (0.060428) | 0.029639 / 0.023109 (0.006530) | 0.347241 / 0.275898 (0.071343) | 0.378838 / 0.323480 (0.055358) | 0.005632 / 0.007986 (-0.002353) | 0.003469 / 0.004328 (-0.000860) | 0.075536 / 0.004250 (0.071285) | 0.043301 / 0.037052 (0.006249) | 0.348091 / 0.258489 (0.089602) | 0.388595 / 0.293841 (0.094754) | 0.033512 / 0.128546 (-0.095034) | 0.011754 / 0.075646 (-0.063892) | 0.321003 / 0.419271 (-0.098268) | 0.044634 / 0.043533 (0.001101) | 0.346688 / 0.255139 (0.091549) | 0.366346 / 0.283200 (0.083147) | 0.093650 / 0.141683 (-0.048033) | 1.509913 / 1.452155 (0.057759) | 1.596414 / 1.492716 (0.103698) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230466 / 0.018006 (0.212459) | 0.417106 / 0.000490 (0.416617) | 0.000959 / 0.000200 (0.000759) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025581 / 0.037411 (-0.011830) | 0.105246 / 0.014526 (0.090720) | 0.108997 / 0.176557 (-0.067560) | 0.144342 / 0.737135 (-0.592794) | 0.113911 / 0.296338 (-0.182427) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479608 / 0.215209 (0.264399) | 4.766081 / 2.077655 (2.688426) | 2.446597 / 1.504120 (0.942477) | 2.228278 / 1.541195 (0.687083) | 2.289943 / 1.468490 (0.821453) | 0.703146 / 4.584777 (-3.881631) | 3.414150 / 3.745712 (-0.331562) | 2.957730 / 5.269862 (-2.312132) | 1.531524 / 4.565676 (-3.034152) | 0.083449 / 0.424275 (-0.340826) | 0.012684 / 0.007607 (0.005077) | 0.587622 / 0.226044 (0.361578) | 5.888791 / 2.268929 (3.619863) | 2.884200 / 55.444624 (-52.560424) | 2.543739 / 6.876477 (-4.332737) | 2.596245 / 2.142072 (0.454173) | 0.813070 / 4.805227 (-3.992157) | 0.152706 / 6.500664 (-6.347958) | 0.069257 / 0.075469 (-0.006212) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.302945 / 1.841788 (-0.538842) | 14.484051 / 8.074308 (6.409743) | 14.216143 / 10.191392 (4.024751) | 0.154537 / 0.680424 (-0.525886) | 0.016909 / 0.534201 (-0.517292) | 0.389433 / 0.579283 (-0.189850) | 0.393280 / 0.434364 (-0.041084) | 0.446884 / 0.540337 (-0.093453) | 0.534394 / 1.386936 (-0.852542) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008822 / 0.011353 (-0.002530) | 0.004826 / 0.011008 (-0.006182) | 0.102710 / 0.038508 (0.064202) | 0.030353 / 0.023109 (0.007244) | 0.297224 / 0.275898 (0.021326) | 0.371861 / 0.323480 (0.048381) | 0.007266 / 0.007986 (-0.000720) | 0.003632 / 0.004328 (-0.000696) | 0.079960 / 0.004250 (0.075710) | 0.036908 / 0.037052 (-0.000144) | 0.309582 / 0.258489 (0.051093) | 0.350108 / 0.293841 (0.056267) | 0.034280 / 0.128546 (-0.094266) | 0.011739 / 0.075646 (-0.063907) | 0.323217 / 0.419271 (-0.096054) | 0.043491 / 0.043533 (-0.000042) | 0.298454 / 0.255139 (0.043315) | 0.326735 / 0.283200 (0.043535) | 0.093955 / 0.141683 (-0.047728) | 1.494313 / 1.452155 (0.042159) | 1.562104 / 1.492716 (0.069388) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182796 / 0.018006 (0.164790) | 0.420133 / 0.000490 (0.419643) | 0.002537 / 0.000200 (0.002337) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023143 / 0.037411 (-0.014269) | 0.098560 / 0.014526 (0.084034) | 0.105060 / 0.176557 (-0.071496) | 0.140269 / 0.737135 (-0.596866) | 0.109120 / 0.296338 (-0.187219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419907 / 0.215209 (0.204698) | 4.196179 / 2.077655 (2.118524) | 1.887663 / 1.504120 (0.383543) | 1.686232 / 1.541195 (0.145037) | 1.741741 / 1.468490 
(0.273251) | 0.696222 / 4.584777 (-3.888555) | 3.400250 / 3.745712 (-0.345462) | 1.875058 / 5.269862 (-3.394803) | 1.159466 / 4.565676 (-3.406211) | 0.082520 / 0.424275 (-0.341755) | 0.012408 / 0.007607 (0.004801) | 0.525212 / 0.226044 (0.299168) | 5.283691 / 2.268929 (3.014762) | 2.314487 / 55.444624 (-53.130138) | 1.966212 / 6.876477 (-4.910265) | 2.023458 / 2.142072 (-0.118615) | 0.808896 / 4.805227 (-3.996331) | 0.148973 / 6.500664 (-6.351691) | 0.065378 / 0.075469 (-0.010091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223833 / 1.841788 (-0.617955) | 14.053651 / 8.074308 (5.979343) | 14.072165 / 10.191392 (3.880773) | 0.156006 / 0.680424 (-0.524418) | 0.028665 / 0.534201 (-0.505536) | 0.392099 / 0.579283 (-0.187184) | 0.401460 / 0.434364 (-0.032904) | 0.462184 / 0.540337 (-0.078153) | 0.540459 / 1.386936 (-0.846477) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006907 / 0.011353 (-0.004446) | 0.004585 / 0.011008 (-0.006423) | 0.099027 / 0.038508 (0.060519) | 0.028317 / 0.023109 (0.005208) | 0.421068 / 0.275898 (0.145170) | 0.450712 / 0.323480 (0.127233) | 0.005229 / 0.007986 (-0.002756) | 0.004873 / 0.004328 (0.000545) | 0.077374 / 0.004250 (0.073124) | 0.042530 / 0.037052 (0.005477) | 0.417392 / 0.258489 (0.158903) | 0.462605 / 0.293841 (0.168764) | 0.032195 / 0.128546 (-0.096351) | 0.011777 / 0.075646 (-0.063870) | 0.321927 / 0.419271 (-0.097344) | 0.041999 / 0.043533 (-0.001533) | 0.419402 / 0.255139 (0.164263) | 0.437179 / 0.283200 (0.153979) | 0.089549 / 0.141683 (-0.052134) | 1.469525 / 1.452155 (0.017370) | 1.586407 / 1.492716 (0.093691) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209533 / 0.018006 (0.191526) | 0.413886 / 0.000490 (0.413396) | 0.003357 / 0.000200 (0.003157) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026133 / 0.037411 (-0.011278) | 0.103128 / 0.014526 (0.088602) | 0.110604 / 0.176557 (-0.065952) | 0.153055 / 0.737135 (-0.584080) | 0.112257 / 0.296338 (-0.184081) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471281 / 0.215209 (0.256072) | 4.708361 / 2.077655 (2.630706) | 2.572681 / 1.504120 (1.068561) | 2.370536 / 1.541195 (0.829341) | 2.456010 / 1.468490 (0.987520) | 0.694173 / 4.584777 (-3.890603) | 3.434511 / 3.745712 (-0.311201) | 1.877169 / 5.269862 (-3.392693) | 1.158387 / 4.565676 (-3.407289) | 0.081849 / 0.424275 (-0.342426) | 0.012176 / 0.007607 (0.004569) | 0.581736 / 0.226044 (0.355692) | 5.803173 / 2.268929 (3.534245) | 3.040003 / 55.444624 (-52.404621) | 2.704698 / 6.876477 (-4.171779) | 2.760138 / 2.142072 (0.618065) | 0.802557 / 4.805227 (-4.002671) | 0.151397 / 6.500664 (-6.349268) | 0.068308 / 0.075469 (-0.007161) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304062 / 1.841788 (-0.537725) | 14.364809 / 8.074308 (6.290501) | 14.192131 / 10.191392 (4.000739) | 0.150025 / 0.680424 (-0.530399) | 0.017020 / 0.534201 (-0.517181) | 0.389235 / 0.579283 (-0.190048) | 0.387557 / 0.434364 (-0.046807) | 0.454636 / 0.540337 (-0.085702) | 0.558182 / 1.386936 (-0.828754) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008519 / 0.011353 (-0.002834) | 0.004538 / 0.011008 (-0.006470) | 0.102066 / 0.038508 (0.063558) | 0.029700 / 0.023109 (0.006591) | 0.304573 / 0.275898 (0.028675) | 0.366232 / 0.323480 (0.042752) | 0.007154 / 0.007986 (-0.000832) | 0.003497 / 0.004328 (-0.000831) | 0.079119 / 0.004250 (0.074868) | 0.036088 / 0.037052 (-0.000964) | 0.311076 / 0.258489 (0.052587) | 0.352205 / 0.293841 (0.058364) | 0.033706 / 0.128546 (-0.094840) | 0.011657 / 0.075646 (-0.063990) | 0.324024 / 0.419271 (-0.095247) | 0.040777 / 0.043533 (-0.002756) | 0.302661 / 0.255139 (0.047522) | 0.329091 / 0.283200 (0.045891) | 0.086774 / 0.141683 (-0.054909) | 1.485874 / 1.452155 (0.033720) | 1.535726 / 1.492716 (0.043009) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194284 / 0.018006 (0.176277) | 0.412875 / 0.000490 (0.412385) | 0.003348 / 0.000200 (0.003148) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022432 / 0.037411 (-0.014979) | 0.095008 / 0.014526 (0.080482) | 0.103268 / 0.176557 (-0.073288) | 0.140121 / 0.737135 (-0.597014) | 0.106619 / 0.296338 (-0.189719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414786 / 0.215209 (0.199577) | 4.146345 / 2.077655 (2.068690) | 1.873703 / 1.504120 (0.369583) | 1.673498 / 1.541195 (0.132303) | 1.716993 / 1.468490 
(0.248502) | 0.692098 / 4.584777 (-3.892679) | 3.380991 / 3.745712 (-0.364721) | 1.846811 / 5.269862 (-3.423050) | 1.159617 / 4.565676 (-3.406059) | 0.081867 / 0.424275 (-0.342408) | 0.012371 / 0.007607 (0.004764) | 0.526228 / 0.226044 (0.300184) | 5.273139 / 2.268929 (3.004211) | 2.327147 / 55.444624 (-53.117477) | 1.968366 / 6.876477 (-4.908111) | 2.018053 / 2.142072 (-0.124019) | 0.816098 / 4.805227 (-3.989130) | 0.149438 / 6.500664 (-6.351226) | 0.065000 / 0.075469 (-0.010469) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244408 / 1.841788 (-0.597380) | 13.774354 / 8.074308 (5.700046) | 14.178923 / 10.191392 (3.987531) | 0.150032 / 0.680424 (-0.530392) | 0.029736 / 0.534201 (-0.504465) | 0.399134 / 0.579283 (-0.180149) | 0.404214 / 0.434364 (-0.030150) | 0.462096 / 0.540337 (-0.078242) | 0.542256 / 1.386936 (-0.844680) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006776 / 0.011353 (-0.004577) | 0.004586 / 0.011008 (-0.006422) | 0.097658 / 0.038508 (0.059150) | 0.027627 / 0.023109 (0.004517) | 0.423794 / 0.275898 (0.147896) | 0.447443 / 0.323480 (0.123963) | 0.005099 / 0.007986 (-0.002886) | 0.004846 / 0.004328 (0.000517) | 0.075135 / 0.004250 (0.070884) | 0.038068 / 0.037052 (0.001016) | 0.420999 / 0.258489 (0.162510) | 0.460368 / 0.293841 (0.166527) | 0.032107 / 0.128546 (-0.096439) | 0.011775 / 0.075646 (-0.063871) | 0.323854 / 0.419271 (-0.095418) | 0.045538 / 0.043533 (0.002005) | 0.420949 / 0.255139 (0.165810) | 0.441906 / 0.283200 (0.158706) | 0.091955 / 0.141683 (-0.049728) | 1.523736 / 1.452155 (0.071581) | 1.587865 / 1.492716 (0.095148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.263297 / 0.018006 (0.245290) | 0.416170 / 0.000490 (0.415680) | 0.023161 / 0.000200 (0.022961) | 0.000243 / 0.000054 (0.000188) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024000 / 0.037411 (-0.013412) | 0.097787 / 0.014526 (0.083262) | 0.106884 / 0.176557 (-0.069672) | 0.140861 / 0.737135 (-0.596274) | 0.108228 / 0.296338 (-0.188111) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477222 / 0.215209 (0.262013) | 4.774729 / 2.077655 (2.697074) | 2.451575 / 1.504120 (0.947455) | 2.251255 / 1.541195 (0.710060) | 2.281154 / 1.468490 (0.812664) | 0.699394 / 4.584777 (-3.885383) | 3.421575 / 3.745712 (-0.324137) | 2.704713 / 5.269862 (-2.565148) | 1.508464 / 4.565676 (-3.057212) | 0.082199 / 0.424275 (-0.342076) | 0.012586 / 0.007607 (0.004979) | 0.588783 / 0.226044 (0.362739) | 5.878434 / 2.268929 (3.609505) | 2.927422 / 55.444624 (-52.517202) | 2.574357 / 6.876477 (-4.302120) | 2.603626 / 2.142072 (0.461554) | 0.804706 / 4.805227 (-4.000521) | 0.152919 / 6.500664 (-6.347745) | 0.069316 / 0.075469 (-0.006153) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280025 / 1.841788 (-0.561763) | 13.968407 / 8.074308 (5.894099) | 13.874506 / 10.191392 (3.683114) | 0.154711 / 0.680424 (-0.525713) | 0.016827 / 0.534201 (-0.517374) | 0.377775 / 0.579283 (-0.201508) | 0.393035 / 0.434364 (-0.041329) | 0.439405 / 0.540337 (-0.100932) | 0.528135 / 1.386936 (-0.858801) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009035 / 0.011353 (-0.002318) | 0.004518 / 0.011008 (-0.006490) | 0.102077 / 0.038508 (0.063569) | 0.030169 / 0.023109 (0.007060) | 0.297713 / 0.275898 (0.021815) | 0.364976 / 0.323480 (0.041496) | 0.007079 / 0.007986 (-0.000906) | 0.003438 / 0.004328 (-0.000890) | 0.079667 / 0.004250 (0.075416) | 0.035890 / 0.037052 (-0.001162) | 0.306065 / 0.258489 (0.047576) | 0.352133 / 0.293841 (0.058292) | 0.033800 / 0.128546 (-0.094746) | 0.011613 / 0.075646 (-0.064034) | 0.322917 / 0.419271 (-0.096354) | 0.040973 / 0.043533 (-0.002560) | 0.300896 / 0.255139 (0.045757) | 0.331540 / 0.283200 (0.048341) | 0.089579 / 0.141683 (-0.052103) | 1.466755 / 1.452155 (0.014600) | 1.522120 / 1.492716 (0.029404) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193172 / 0.018006 (0.175166) | 0.408878 / 0.000490 (0.408389) | 0.001586 / 0.000200 (0.001386) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023496 / 0.037411 (-0.013915) | 0.098046 / 0.014526 (0.083520) | 0.104599 / 0.176557 (-0.071957) | 0.139054 / 0.737135 (-0.598081) | 0.111163 / 0.296338 (-0.185175) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417374 / 0.215209 (0.202165) | 4.145808 / 2.077655 (2.068153) | 1.847101 / 1.504120 (0.342981) | 1.637207 / 1.541195 (0.096012) | 1.676906 / 1.468490 
(0.208416) | 0.689851 / 4.584777 (-3.894926) | 3.402099 / 3.745712 (-0.343614) | 1.896808 / 5.269862 (-3.373054) | 1.257876 / 4.565676 (-3.307801) | 0.081744 / 0.424275 (-0.342531) | 0.012206 / 0.007607 (0.004599) | 0.524830 / 0.226044 (0.298786) | 5.251344 / 2.268929 (2.982416) | 2.277907 / 55.444624 (-53.166717) | 1.933985 / 6.876477 (-4.942491) | 2.038500 / 2.142072 (-0.103573) | 0.808696 / 4.805227 (-3.996532) | 0.149488 / 6.500664 (-6.351176) | 0.065323 / 0.075469 (-0.010146) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204294 / 1.841788 (-0.637493) | 13.696526 / 8.074308 (5.622218) | 13.947195 / 10.191392 (3.755802) | 0.136812 / 0.680424 (-0.543611) | 0.028625 / 0.534201 (-0.505576) | 0.397662 / 0.579283 (-0.181621) | 0.403423 / 0.434364 (-0.030941) | 0.465288 / 0.540337 (-0.075049) | 0.551919 / 1.386936 (-0.835017) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006467 / 0.011353 (-0.004886) | 0.004562 / 0.011008 (-0.006447) | 0.097514 / 0.038508 (0.059006) | 0.027471 / 0.023109 (0.004362) | 0.425504 / 0.275898 (0.149606) | 0.458856 / 0.323480 (0.135376) | 0.004816 / 0.007986 (-0.003169) | 0.003264 / 0.004328 (-0.001065) | 0.074947 / 0.004250 (0.070697) | 0.037147 / 0.037052 (0.000095) | 0.429513 / 0.258489 (0.171024) | 0.463971 / 0.293841 (0.170130) | 0.031638 / 0.128546 (-0.096908) | 0.011545 / 0.075646 (-0.064101) | 0.320261 / 0.419271 (-0.099010) | 0.041570 / 0.043533 (-0.001963) | 0.424809 / 0.255139 (0.169670) | 0.447158 / 0.283200 (0.163959) | 0.088418 / 0.141683 (-0.053265) | 1.492242 / 1.452155 (0.040087) | 1.545523 / 1.492716 (0.052807) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217865 / 0.018006 (0.199859) | 0.399925 / 0.000490 (0.399436) | 0.004853 / 0.000200 (0.004653) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024275 / 0.037411 (-0.013137) | 0.098249 / 0.014526 (0.083723) | 0.107110 / 0.176557 (-0.069446) | 0.143870 / 0.737135 (-0.593265) | 0.108796 / 0.296338 (-0.187542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470856 / 0.215209 (0.255647) | 4.687921 / 2.077655 (2.610266) | 2.448631 / 1.504120 (0.944511) | 2.247748 / 1.541195 (0.706553) | 2.287713 / 1.468490 (0.819223) | 0.687534 / 4.584777 (-3.897243) | 3.421099 / 3.745712 (-0.324613) | 2.977280 / 5.269862 (-2.292582) | 1.274837 / 4.565676 (-3.290839) | 0.081611 / 0.424275 (-0.342664) | 0.012603 / 0.007607 (0.004996) | 0.574600 / 0.226044 (0.348556) | 5.802826 / 2.268929 (3.533898) | 2.913178 / 55.444624 (-52.531446) | 2.589486 / 6.876477 (-4.286991) | 2.630004 / 2.142072 (0.487932) | 0.790087 / 4.805227 (-4.015140) | 0.150019 / 6.500664 (-6.350645) | 0.067346 / 0.075469 (-0.008123) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266521 / 1.841788 (-0.575267) | 13.818770 / 8.074308 (5.744462) | 13.872277 / 10.191392 (3.680885) | 0.147375 / 0.680424 (-0.533049) | 0.016837 / 0.534201 (-0.517363) | 0.376421 / 0.579283 (-0.202862) | 0.400236 / 0.434364 (-0.034128) | 0.436623 / 0.540337 (-0.103714) | 0.527173 / 1.386936 (-0.859763) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009341 / 0.011353 (-0.002012) | 0.005188 / 0.011008 (-0.005820) | 0.101831 / 0.038508 (0.063323) | 0.035141 / 0.023109 (0.012032) | 0.299324 / 0.275898 (0.023426) | 0.334749 / 0.323480 (0.011269) | 0.007958 / 0.007986 (-0.000027) | 0.005482 / 0.004328 (0.001153) | 0.077070 / 0.004250 (0.072820) | 0.044733 / 0.037052 (0.007680) | 0.310398 / 0.258489 (0.051909) | 0.347925 / 0.293841 (0.054084) | 0.038141 / 0.128546 (-0.090405) | 0.012135 / 0.075646 (-0.063512) | 0.333799 / 0.419271 (-0.085472) | 0.048881 / 0.043533 (0.005348) | 0.301336 / 0.255139 (0.046197) | 0.314592 / 0.283200 (0.031393) | 0.103635 / 0.141683 (-0.038048) | 1.437321 / 1.452155 (-0.014833) | 1.598781 / 1.492716 (0.106065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248911 / 0.018006 (0.230905) | 0.528932 / 0.000490 (0.528442) | 0.002495 / 0.000200 (0.002295) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027903 / 0.037411 (-0.009509) | 0.106716 / 0.014526 (0.092190) | 0.122650 / 0.176557 (-0.053907) | 0.162481 / 0.737135 (-0.574654) | 0.126402 / 0.296338 (-0.169937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.352819 / 0.215209 (0.137610) | 3.522761 / 2.077655 (1.445106) | 1.576761 / 1.504120 (0.072641) | 1.411631 / 1.541195 (-0.129563) | 1.449689 / 1.468490 
(-0.018801) | 0.608987 / 4.584777 (-3.975790) | 3.705121 / 3.745712 (-0.040592) | 2.085071 / 5.269862 (-3.184790) | 1.308653 / 4.565676 (-3.257024) | 0.083763 / 0.424275 (-0.340512) | 0.011957 / 0.007607 (0.004350) | 0.502182 / 0.226044 (0.276137) | 5.008829 / 2.268929 (2.739900) | 2.244687 / 55.444624 (-53.199937) | 1.891411 / 6.876477 (-4.985065) | 1.940789 / 2.142072 (-0.201284) | 0.825966 / 4.805227 (-3.979261) | 0.165267 / 6.500664 (-6.335397) | 0.063020 / 0.075469 (-0.012449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196707 / 1.841788 (-0.645081) | 14.236877 / 8.074308 (6.162569) | 14.872954 / 10.191392 (4.681562) | 0.168560 / 0.680424 (-0.511864) | 0.029038 / 0.534201 (-0.505163) | 0.440192 / 0.579283 (-0.139091) | 0.437021 / 0.434364 (0.002657) | 0.519612 / 0.540337 (-0.020725) | 0.612013 / 1.386936 (-0.774923) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007170 / 0.011353 (-0.004183) | 0.005303 / 0.011008 (-0.005705) | 0.098503 / 0.038508 (0.059995) | 0.032573 / 0.023109 (0.009463) | 0.398203 / 0.275898 (0.122305) | 0.446075 / 0.323480 (0.122595) | 0.005712 / 0.007986 (-0.002274) | 0.004165 / 0.004328 (-0.000164) | 0.074273 / 0.004250 (0.070023) | 0.049587 / 0.037052 (0.012534) | 0.399458 / 0.258489 (0.140969) | 0.459167 / 0.293841 (0.165327) | 0.036063 / 0.128546 (-0.092483) | 0.012394 / 0.075646 (-0.063253) | 0.332559 / 0.419271 (-0.086713) | 0.048499 / 0.043533 (0.004967) | 0.404044 / 0.255139 (0.148905) | 0.410462 / 0.283200 (0.127262) | 0.104104 / 0.141683 (-0.037579) | 1.488141 / 1.452155 (0.035986) | 1.535517 / 1.492716 (0.042801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292976 / 0.018006 (0.274970) | 0.569139 / 0.000490 (0.568649) | 0.000553 / 0.000200 (0.000353) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030144 / 0.037411 (-0.007267) | 0.098699 / 0.014526 (0.084173) | 0.114437 / 0.176557 (-0.062120) | 0.156657 / 0.737135 (-0.580478) | 0.117449 / 0.296338 (-0.178890) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441921 / 0.215209 (0.226712) | 4.413090 / 2.077655 (2.335435) | 2.190458 / 1.504120 (0.686338) | 2.008919 / 1.541195 (0.467724) | 2.049657 / 1.468490 (0.581167) | 0.691751 / 4.584777 (-3.893026) | 3.767524 / 3.745712 (0.021812) | 3.395564 / 5.269862 (-1.874297) | 1.633480 / 4.565676 (-2.932196) | 0.084880 / 0.424275 (-0.339395) | 0.012133 / 0.007607 (0.004526) | 0.555372 / 0.226044 (0.329327) | 5.522820 / 2.268929 (3.253892) | 2.723331 / 55.444624 (-52.721293) | 2.337583 / 6.876477 (-4.538894) | 2.368746 / 2.142072 (0.226674) | 0.830127 / 4.805227 (-3.975100) | 0.166239 / 6.500664 (-6.334425) | 0.064279 / 0.075469 (-0.011190) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.123421 / 1.841788 (-0.718367) | 14.413392 / 8.074308 (6.339084) | 12.865143 / 10.191392 (2.673751) | 0.132198 / 0.680424 (-0.548226) | 0.016138 / 0.534201 (-0.518063) | 0.380760 / 0.579283 (-0.198523) | 0.387223 / 0.434364 (-0.047141) | 0.445574 / 0.540337 (-0.094764) | 0.535658 / 1.386936 (-0.851278) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008316 / 0.011353 (-0.003037) | 0.004503 / 0.011008 (-0.006505) | 0.100565 / 0.038508 (0.062057) | 0.030388 / 0.023109 (0.007279) | 0.304417 / 0.275898 (0.028519) | 0.369655 / 0.323480 (0.046175) | 0.007796 / 0.007986 (-0.000190) | 0.003450 / 0.004328 (-0.000878) | 0.078694 / 0.004250 (0.074443) | 0.038068 / 0.037052 (0.001016) | 0.316353 / 0.258489 (0.057864) | 0.352344 / 0.293841 (0.058503) | 0.033271 / 0.128546 (-0.095276) | 0.011427 / 0.075646 (-0.064220) | 0.322367 / 0.419271 (-0.096904) | 0.041497 / 0.043533 (-0.002036) | 0.305876 / 0.255139 (0.050737) | 0.332279 / 0.283200 (0.049079) | 0.086719 / 0.141683 (-0.054964) | 1.488367 / 1.452155 (0.036212) | 1.528943 / 1.492716 (0.036227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.171072 / 0.018006 (0.153066) | 0.421048 / 0.000490 (0.420558) | 0.003622 / 0.000200 (0.003422) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022632 / 0.037411 (-0.014779) | 0.095304 / 0.014526 (0.080778) | 0.106254 / 0.176557 (-0.070302) | 0.138437 / 0.737135 (-0.598698) | 0.107258 / 0.296338 (-0.189080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423201 / 0.215209 (0.207992) | 4.208397 / 2.077655 (2.130742) | 1.899800 / 1.504120 (0.395680) | 1.682782 / 1.541195 (0.141587) | 1.708840 / 1.468490 
(0.240350) | 0.694492 / 4.584777 (-3.890285) | 3.380369 / 3.745712 (-0.365344) | 1.851731 / 5.269862 (-3.418130) | 1.151615 / 4.565676 (-3.414061) | 0.082446 / 0.424275 (-0.341829) | 0.012483 / 0.007607 (0.004876) | 0.533688 / 0.226044 (0.307643) | 5.373434 / 2.268929 (3.104505) | 2.346403 / 55.444624 (-53.098221) | 1.978505 / 6.876477 (-4.897971) | 2.005875 / 2.142072 (-0.136198) | 0.820785 / 4.805227 (-3.984442) | 0.150728 / 6.500664 (-6.349936) | 0.065761 / 0.075469 (-0.009708) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244550 / 1.841788 (-0.597237) | 13.219096 / 8.074308 (5.144788) | 13.960463 / 10.191392 (3.769071) | 0.135572 / 0.680424 (-0.544852) | 0.028746 / 0.534201 (-0.505455) | 0.393082 / 0.579283 (-0.186201) | 0.402852 / 0.434364 (-0.031512) | 0.461191 / 0.540337 (-0.079147) | 0.543500 / 1.386936 (-0.843436) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006316 / 0.011353 (-0.005037) | 0.004394 / 0.011008 (-0.006615) | 0.096478 / 0.038508 (0.057970) | 0.026965 / 0.023109 (0.003855) | 0.340371 / 0.275898 (0.064473) | 0.368334 / 0.323480 (0.044854) | 0.004744 / 0.007986 (-0.003242) | 0.004652 / 0.004328 (0.000324) | 0.074479 / 0.004250 (0.070228) | 0.036358 / 0.037052 (-0.000694) | 0.342968 / 0.258489 (0.084479) | 0.383675 / 0.293841 (0.089834) | 0.031439 / 0.128546 (-0.097107) | 0.011529 / 0.075646 (-0.064117) | 0.319560 / 0.419271 (-0.099711) | 0.041370 / 0.043533 (-0.002163) | 0.342594 / 0.255139 (0.087455) | 0.363237 / 0.283200 (0.080038) | 0.087316 / 0.141683 (-0.054367) | 1.468690 / 1.452155 (0.016535) | 1.553974 / 1.492716 (0.061257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198366 / 0.018006 (0.180360) | 0.401581 / 0.000490 (0.401091) | 0.000400 / 0.000200 (0.000200) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023150 / 0.037411 (-0.014261) | 0.097797 / 0.014526 (0.083271) | 0.106198 / 0.176557 (-0.070359) | 0.139599 / 0.737135 (-0.597536) | 0.108361 / 0.296338 (-0.187978) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472962 / 0.215209 (0.257753) | 4.702688 / 2.077655 (2.625033) | 2.401002 / 1.504120 (0.896882) | 2.193857 / 1.541195 (0.652663) | 2.219188 / 1.468490 (0.750697) | 0.689993 / 4.584777 (-3.894784) | 3.369409 / 3.745712 (-0.376304) | 1.824801 / 5.269862 (-3.445061) | 1.150815 / 4.565676 (-3.414862) | 0.082197 / 0.424275 (-0.342078) | 0.012287 / 0.007607 (0.004679) | 0.581963 / 0.226044 (0.355918) | 5.786943 / 2.268929 (3.518015) | 2.871235 / 55.444624 (-52.573389) | 2.516009 / 6.876477 (-4.360468) | 2.535669 / 2.142072 (0.393597) | 0.804733 / 4.805227 (-4.000494) | 0.150545 / 6.500664 (-6.350119) | 0.066964 / 0.075469 (-0.008505) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285431 / 1.841788 (-0.556356) | 14.097108 / 8.074308 (6.022800) | 13.821497 / 10.191392 (3.630105) | 0.141922 / 0.680424 (-0.538502) | 0.016964 / 0.534201 (-0.517237) | 0.374784 / 0.579283 (-0.204500) | 0.381034 / 0.434364 (-0.053330) | 0.435487 / 0.540337 (-0.104850) | 0.521894 / 1.386936 (-0.865042) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009486 / 0.011353 (-0.001867) | 0.005363 / 0.011008 (-0.005645) | 0.101008 / 0.038508 (0.062500) | 0.036355 / 0.023109 (0.013246) | 0.290575 / 0.275898 (0.014677) | 0.391634 / 0.323480 (0.068154) | 0.009085 / 0.007986 (0.001099) | 0.005780 / 0.004328 (0.001451) | 0.077848 / 0.004250 (0.073598) | 0.049062 / 0.037052 (0.012009) | 0.310900 / 0.258489 (0.052411) | 0.358224 / 0.293841 (0.064383) | 0.038838 / 0.128546 (-0.089708) | 0.012244 / 0.075646 (-0.063402) | 0.333701 / 0.419271 (-0.085570) | 0.048021 / 0.043533 (0.004488) | 0.289584 / 0.255139 (0.034445) | 0.317556 / 0.283200 (0.034356) | 0.109807 / 0.141683 (-0.031876) | 1.465966 / 1.452155 (0.013811) | 1.526341 / 1.492716 (0.033625) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246221 / 0.018006 (0.228215) | 0.580659 / 0.000490 (0.580169) | 0.000627 / 0.000200 (0.000427) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028352 / 0.037411 (-0.009059) | 0.110569 / 0.014526 (0.096043) | 0.126456 / 0.176557 (-0.050100) | 0.163633 / 0.737135 (-0.573503) | 0.128252 / 0.296338 (-0.168087) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397271 / 0.215209 (0.182062) | 3.975336 / 2.077655 (1.897682) | 1.786957 / 1.504120 (0.282837) | 1.598468 / 1.541195 (0.057273) | 1.645299 / 1.468490 
(0.176809) | 0.686221 / 4.584777 (-3.898556) | 3.753184 / 3.745712 (0.007472) | 2.089505 / 5.269862 (-3.180356) | 1.325799 / 4.565676 (-3.239878) | 0.084608 / 0.424275 (-0.339667) | 0.012343 / 0.007607 (0.004736) | 0.509951 / 0.226044 (0.283907) | 5.092102 / 2.268929 (2.823174) | 2.297551 / 55.444624 (-53.147073) | 1.938177 / 6.876477 (-4.938300) | 2.012448 / 2.142072 (-0.129625) | 0.835206 / 4.805227 (-3.970021) | 0.166373 / 6.500664 (-6.334291) | 0.063996 / 0.075469 (-0.011473) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212936 / 1.841788 (-0.628851) | 15.067370 / 8.074308 (6.993062) | 14.165214 / 10.191392 (3.973822) | 0.157041 / 0.680424 (-0.523383) | 0.029612 / 0.534201 (-0.504589) | 0.440006 / 0.579283 (-0.139277) | 0.439165 / 0.434364 (0.004801) | 0.524970 / 0.540337 (-0.015368) | 0.608305 / 1.386936 (-0.778631) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007433 / 0.011353 (-0.003920) | 0.005310 / 0.011008 (-0.005698) | 0.097194 / 0.038508 (0.058686) | 0.033265 / 0.023109 (0.010156) | 0.369908 / 0.275898 (0.094010) | 0.411508 / 0.323480 (0.088028) | 0.006000 / 0.007986 (-0.001986) | 0.005647 / 0.004328 (0.001319) | 0.075597 / 0.004250 (0.071347) | 0.051951 / 0.037052 (0.014899) | 0.378469 / 0.258489 (0.119980) | 0.424849 / 0.293841 (0.131008) | 0.036700 / 0.128546 (-0.091846) | 0.012535 / 0.075646 (-0.063111) | 0.333197 / 0.419271 (-0.086074) | 0.049046 / 0.043533 (0.005513) | 0.381845 / 0.255139 (0.126706) | 0.397846 / 0.283200 (0.114646) | 0.109152 / 0.141683 (-0.032531) | 1.432407 / 1.452155 (-0.019748) | 1.555509 / 1.492716 (0.062793) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265433 / 0.018006 (0.247427) | 0.559590 / 0.000490 (0.559100) | 0.000492 / 0.000200 (0.000292) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029748 / 0.037411 (-0.007663) | 0.110490 / 0.014526 (0.095964) | 0.124125 / 0.176557 (-0.052431) | 0.160089 / 0.737135 (-0.577046) | 0.128755 / 0.296338 (-0.167583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443976 / 0.215209 (0.228767) | 4.416960 / 2.077655 (2.339305) | 2.239408 / 1.504120 (0.735288) | 2.055341 / 1.541195 (0.514147) | 2.093479 / 1.468490 (0.624988) | 0.688846 / 4.584777 (-3.895930) | 3.797526 / 3.745712 (0.051814) | 3.578137 / 5.269862 (-1.691725) | 2.015073 / 4.565676 (-2.550603) | 0.084126 / 0.424275 (-0.340149) | 0.012581 / 0.007607 (0.004974) | 0.549774 / 0.226044 (0.323730) | 5.492185 / 2.268929 (3.223256) | 2.739851 / 55.444624 (-52.704773) | 2.371091 / 6.876477 (-4.505386) | 2.400178 / 2.142072 (0.258105) | 0.831227 / 4.805227 (-3.974001) | 0.166156 / 6.500664 (-6.334508) | 0.063901 / 0.075469 (-0.011568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236127 / 1.841788 (-0.605660) | 15.236884 / 8.074308 (7.162576) | 14.434351 / 10.191392 (4.242959) | 0.163725 / 0.680424 (-0.516699) | 0.018009 / 0.534201 (-0.516192) | 0.430612 / 0.579283 (-0.148671) | 0.420426 / 0.434364 (-0.013938) | 0.497062 / 0.540337 (-0.043275) | 0.590924 / 1.386936 (-0.796012) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010862 / 0.011353 (-0.000491) | 0.005741 / 0.011008 (-0.005267) | 0.111911 / 0.038508 (0.073403) | 0.042316 / 0.023109 (0.019207) | 0.347665 / 0.275898 (0.071767) | 0.377335 / 0.323480 (0.053855) | 0.009400 / 0.007986 (0.001414) | 0.006814 / 0.004328 (0.002486) | 0.087194 / 0.004250 (0.082943) | 0.046878 / 0.037052 (0.009826) | 0.348920 / 0.258489 (0.090430) | 0.393347 / 0.293841 (0.099507) | 0.044212 / 0.128546 (-0.084334) | 0.013925 / 0.075646 (-0.061722) | 0.386076 / 0.419271 (-0.033195) | 0.054195 / 0.043533 (0.010662) | 0.358486 / 0.255139 (0.103347) | 0.360132 / 0.283200 (0.076932) | 0.109783 / 0.141683 (-0.031900) | 1.679875 / 1.452155 (0.227720) | 1.794379 / 1.492716 (0.301663) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221927 / 0.018006 (0.203921) | 0.487352 / 0.000490 (0.486863) | 0.003494 / 0.000200 (0.003294) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032201 / 0.037411 (-0.005210) | 0.125861 / 0.014526 (0.111335) | 0.133905 / 0.176557 (-0.042652) | 0.183319 / 0.737135 (-0.553817) | 0.142646 / 0.296338 (-0.153693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442720 / 0.215209 (0.227511) | 4.602619 / 2.077655 (2.524964) | 2.050214 / 1.504120 (0.546094) | 1.837968 / 1.541195 (0.296773) | 1.961199 / 1.468490 
(0.492709) | 0.793426 / 4.584777 (-3.791351) | 4.472078 / 3.745712 (0.726366) | 2.364903 / 5.269862 (-2.904959) | 1.515076 / 4.565676 (-3.050600) | 0.103087 / 0.424275 (-0.321188) | 0.014676 / 0.007607 (0.007068) | 0.576887 / 0.226044 (0.350843) | 5.785525 / 2.268929 (3.516596) | 2.765231 / 55.444624 (-52.679393) | 2.365364 / 6.876477 (-4.511113) | 2.448335 / 2.142072 (0.306262) | 0.978726 / 4.805227 (-3.826501) | 0.191417 / 6.500664 (-6.309247) | 0.073295 / 0.075469 (-0.002174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.378995 / 1.841788 (-0.462792) | 16.583655 / 8.074308 (8.509347) | 14.944731 / 10.191392 (4.753339) | 0.168916 / 0.680424 (-0.511508) | 0.035272 / 0.534201 (-0.498928) | 0.489729 / 0.579283 (-0.089554) | 0.496231 / 0.434364 (0.061867) | 0.576218 / 0.540337 (0.035880) | 0.673558 / 1.386936 (-0.713378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008104 / 0.011353 (-0.003249) | 0.005179 / 0.011008 (-0.005829) | 0.103908 / 0.038508 (0.065400) | 0.034661 / 0.023109 (0.011552) | 0.398119 / 0.275898 (0.122221) | 0.411765 / 0.323480 (0.088286) | 0.006016 / 0.007986 (-0.001970) | 0.005637 / 0.004328 (0.001308) | 0.073662 / 0.004250 (0.069412) | 0.052411 / 0.037052 (0.015359) | 0.391826 / 0.258489 (0.133337) | 0.455217 / 0.293841 (0.161376) | 0.039924 / 0.128546 (-0.088622) | 0.013390 / 0.075646 (-0.062256) | 0.390319 / 0.419271 (-0.028953) | 0.054312 / 0.043533 (0.010779) | 0.395492 / 0.255139 (0.140353) | 0.446324 / 0.283200 (0.163124) | 0.116461 / 0.141683 (-0.025222) | 1.502163 / 1.452155 (0.050008) | 1.731541 / 1.492716 (0.238825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282612 / 0.018006 (0.264606) | 0.503170 / 0.000490 (0.502680) | 0.005307 / 0.000200 (0.005107) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029071 / 0.037411 (-0.008340) | 0.123831 / 0.014526 (0.109306) | 0.133284 / 0.176557 (-0.043272) | 0.172029 / 0.737135 (-0.565106) | 0.140639 / 0.296338 (-0.155700) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.496812 / 0.215209 (0.281603) | 4.958915 / 2.077655 (2.881260) | 2.559188 / 1.504120 (1.055068) | 2.262434 / 1.541195 (0.721240) | 2.371126 / 1.468490 (0.902636) | 0.780150 / 4.584777 (-3.804627) | 4.417060 / 3.745712 (0.671348) | 2.401909 / 5.269862 (-2.867953) | 1.527943 / 4.565676 (-3.037733) | 0.100074 / 0.424275 (-0.324201) | 0.014853 / 0.007607 (0.007246) | 0.630192 / 0.226044 (0.404147) | 6.409685 / 2.268929 (4.140757) | 3.224718 / 55.444624 (-52.219906) | 2.795301 / 6.876477 (-4.081176) | 2.927205 / 2.142072 (0.785132) | 0.989537 / 4.805227 (-3.815690) | 0.199775 / 6.500664 (-6.300889) | 0.076725 / 0.075469 (0.001256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.433504 / 1.841788 (-0.408284) | 17.117134 / 8.074308 (9.042825) | 16.606367 / 10.191392 (6.414975) | 0.165653 / 0.680424 (-0.514771) | 0.020818 / 0.534201 (-0.513383) | 0.496782 / 0.579283 (-0.082501) | 0.473895 / 0.434364 (0.039531) | 0.576796 / 0.540337 (0.036459) | 0.703272 / 1.386936 (-0.683664) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012501 / 0.011353 (0.001148) | 0.006437 / 0.011008 (-0.004571) | 0.129387 / 0.038508 (0.090878) | 0.035847 / 0.023109 (0.012737) | 0.339243 / 0.275898 (0.063345) | 0.423274 / 0.323480 (0.099794) | 0.008489 / 0.007986 (0.000503) | 0.004596 / 0.004328 (0.000268) | 0.103322 / 0.004250 (0.099071) | 0.043570 / 0.037052 (0.006517) | 0.357004 / 0.258489 (0.098515) | 0.426511 / 0.293841 (0.132670) | 0.062923 / 0.128546 (-0.065623) | 0.021168 / 0.075646 (-0.054478) | 0.387485 / 0.419271 (-0.031787) | 0.059745 / 0.043533 (0.016213) | 0.341101 / 0.255139 (0.085962) | 0.365530 / 0.283200 (0.082331) | 0.102110 / 0.141683 (-0.039573) | 1.729408 / 1.452155 (0.277253) | 1.759510 / 1.492716 (0.266794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187065 / 0.018006 (0.169059) | 0.499685 / 0.000490 (0.499196) | 0.004677 / 0.000200 (0.004478) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025827 / 0.037411 (-0.011584) | 0.113780 / 0.014526 (0.099255) | 0.146060 / 0.176557 (-0.030496) | 0.158169 / 0.737135 (-0.578966) | 0.136133 / 0.296338 (-0.160206) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.608421 / 0.215209 (0.393211) | 5.907395 / 2.077655 (3.829741) | 2.193140 / 1.504120 (0.689021) | 1.870315 / 1.541195 (0.329120) | 1.885660 / 1.468490 
(0.417170) | 1.227637 / 4.584777 (-3.357140) | 5.319242 / 3.745712 (1.573530) | 2.991595 / 5.269862 (-2.278267) | 2.043906 / 4.565676 (-2.521771) | 0.151829 / 0.424275 (-0.272447) | 0.018974 / 0.007607 (0.011367) | 0.778035 / 0.226044 (0.551991) | 7.705796 / 2.268929 (5.436868) | 2.990156 / 55.444624 (-52.454468) | 2.372643 / 6.876477 (-4.503834) | 2.240847 / 2.142072 (0.098775) | 1.407209 / 4.805227 (-3.398018) | 0.242336 / 6.500664 (-6.258328) | 0.069847 / 0.075469 (-0.005622) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445817 / 1.841788 (-0.395970) | 16.059632 / 8.074308 (7.985324) | 18.541971 / 10.191392 (8.350579) | 0.237830 / 0.680424 (-0.442594) | 0.041060 / 0.534201 (-0.493141) | 0.496765 / 0.579283 (-0.082518) | 0.609666 / 0.434364 (0.175302) | 0.584614 / 0.540337 (0.044277) | 0.680858 / 1.386936 (-0.706078) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009037 / 0.011353 (-0.002315) | 0.005961 / 0.011008 (-0.005047) | 0.127204 / 0.038508 (0.088696) | 0.030664 / 0.023109 (0.007555) | 0.417968 / 0.275898 (0.142070) | 0.515316 / 0.323480 (0.191836) | 0.006549 / 0.007986 (-0.001436) | 0.004456 / 0.004328 (0.000128) | 0.083715 / 0.004250 (0.079464) | 0.043701 / 0.037052 (0.006648) | 0.521153 / 0.258489 (0.262664) | 0.565456 / 0.293841 (0.271615) | 0.055298 / 0.128546 (-0.073248) | 0.018103 / 0.075646 (-0.057544) | 0.403990 / 0.419271 (-0.015282) | 0.060162 / 0.043533 (0.016629) | 0.486383 / 0.255139 (0.231244) | 0.470342 / 0.283200 (0.187142) | 0.102269 / 0.141683 (-0.039414) | 1.643241 / 1.452155 (0.191086) | 1.763850 / 1.492716 (0.271133) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185602 / 0.018006 (0.167596) | 0.489163 / 0.000490 (0.488674) | 0.000426 / 0.000200 (0.000226) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026689 / 0.037411 (-0.010722) | 0.111520 / 0.014526 (0.096994) | 0.119838 / 0.176557 (-0.056719) | 0.153698 / 0.737135 (-0.583437) | 0.130969 / 0.296338 (-0.165370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616170 / 0.215209 (0.400961) | 6.219702 / 2.077655 (4.142048) | 2.533554 / 1.504120 (1.029434) | 2.256009 / 1.541195 (0.714815) | 2.217617 / 1.468490 (0.749127) | 1.156920 / 4.584777 (-3.427857) | 5.175759 / 3.745712 (1.430046) | 2.848419 / 5.269862 (-2.421442) | 1.943864 / 4.565676 (-2.621813) | 0.138342 / 0.424275 (-0.285933) | 0.013140 / 0.007607 (0.005533) | 0.782105 / 0.226044 (0.556060) | 7.602003 / 2.268929 (5.333075) | 3.629577 / 55.444624 (-51.815047) | 2.713849 / 6.876477 (-4.162628) | 2.663888 / 2.142072 (0.521816) | 1.418381 / 4.805227 (-3.386847) | 0.250649 / 6.500664 (-6.250015) | 0.073564 / 0.075469 (-0.001905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.483739 / 1.841788 (-0.358049) | 16.386204 / 8.074308 (8.311896) | 20.685262 / 10.191392 (10.493870) | 0.237084 / 0.680424 (-0.443340) | 0.039097 / 0.534201 (-0.495104) | 0.525399 / 0.579283 (-0.053884) | 0.587541 / 0.434364 (0.153177) | 0.566605 / 0.540337 (0.026268) | 0.677384 / 1.386936 (-0.709552) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014050 / 0.011353 (0.002697) | 0.005981 / 0.011008 (-0.005028) | 0.126307 / 0.038508 (0.087799) | 0.035400 / 0.023109 (0.012290) | 0.387821 / 0.275898 (0.111923) | 0.462785 / 0.323480 (0.139305) | 0.009427 / 0.007986 (0.001441) | 0.005081 / 0.004328 (0.000753) | 0.097273 / 0.004250 (0.093023) | 0.044699 / 0.037052 (0.007647) | 0.396025 / 0.258489 (0.137536) | 0.450137 / 0.293841 (0.156296) | 0.055660 / 0.128546 (-0.072886) | 0.022710 / 0.075646 (-0.052936) | 0.443784 / 0.419271 (0.024513) | 0.065756 / 0.043533 (0.022223) | 0.379350 / 0.255139 (0.124211) | 0.396783 / 0.283200 (0.113583) | 0.114088 / 0.141683 (-0.027594) | 1.856834 / 1.452155 (0.404679) | 1.839292 / 1.492716 (0.346576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206748 / 0.018006 (0.188742) | 0.517711 / 0.000490 (0.517222) | 0.008302 / 0.000200 (0.008102) | 0.000494 / 0.000054 (0.000440) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033987 / 0.037411 (-0.003424) | 0.131067 / 0.014526 (0.116542) | 0.155539 / 0.176557 (-0.021018) | 0.188598 / 0.737135 (-0.548537) | 0.156000 / 0.296338 (-0.140338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.641413 / 0.215209 (0.426204) | 6.156680 / 2.077655 (4.079025) | 2.428858 / 1.504120 (0.924738) | 2.086195 / 1.541195 (0.545000) | 2.109604 / 1.468490 
(0.641114) | 1.209426 / 4.584777 (-3.375351) | 5.139398 / 3.745712 (1.393686) | 3.041337 / 5.269862 (-2.228524) | 2.294809 / 4.565676 (-2.270868) | 0.142206 / 0.424275 (-0.282069) | 0.015167 / 0.007607 (0.007560) | 0.816269 / 0.226044 (0.590224) | 7.953931 / 2.268929 (5.685002) | 3.201793 / 55.444624 (-52.242832) | 2.448620 / 6.876477 (-4.427857) | 2.521670 / 2.142072 (0.379597) | 1.484094 / 4.805227 (-3.321133) | 0.255069 / 6.500664 (-6.245595) | 0.076031 / 0.075469 (0.000561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590951 / 1.841788 (-0.250836) | 17.661353 / 8.074308 (9.587045) | 21.097837 / 10.191392 (10.906445) | 0.229265 / 0.680424 (-0.451159) | 0.042618 / 0.534201 (-0.491583) | 0.535942 / 0.579283 (-0.043342) | 0.590195 / 0.434364 (0.155831) | 0.623985 / 0.540337 (0.083648) | 0.742637 / 1.386936 (-0.644299) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009264 / 0.011353 (-0.002088) | 0.008798 / 0.011008 (-0.002210) | 0.122208 / 0.038508 (0.083700) | 0.034835 / 0.023109 (0.011726) | 0.462618 / 0.275898 (0.186720) | 0.505632 / 0.323480 (0.182152) | 0.006320 / 0.007986 (-0.001665) | 0.005383 / 0.004328 (0.001054) | 0.091229 / 0.004250 (0.086979) | 0.045828 / 0.037052 (0.008775) | 0.477507 / 0.258489 (0.219018) | 0.539616 / 0.293841 (0.245775) | 0.061913 / 0.128546 (-0.066633) | 0.019390 / 0.075646 (-0.056257) | 0.420016 / 0.419271 (0.000745) | 0.065958 / 0.043533 (0.022425) | 0.468603 / 0.255139 (0.213464) | 0.486246 / 0.283200 (0.203046) | 0.107924 / 0.141683 (-0.033759) | 1.843614 / 1.452155 (0.391459) | 1.988159 / 1.492716 (0.495442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247043 / 0.018006 (0.229037) | 0.515580 / 0.000490 (0.515090) | 0.005630 / 0.000200 (0.005430) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030674 / 0.037411 (-0.006737) | 0.130783 / 0.014526 (0.116258) | 0.147669 / 0.176557 (-0.028888) | 0.175656 / 0.737135 (-0.561479) | 0.138317 / 0.296338 (-0.158022) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.727119 / 0.215209 (0.511909) | 6.848208 / 2.077655 (4.770553) | 3.121418 / 1.504120 (1.617298) | 2.701799 / 1.541195 (1.160604) | 2.749179 / 1.468490 (1.280689) | 1.312058 / 4.584777 (-3.272719) | 5.400562 / 3.745712 (1.654850) | 3.058142 / 5.269862 (-2.211719) | 2.076361 / 4.565676 (-2.489316) | 0.142169 / 0.424275 (-0.282106) | 0.014340 / 0.007607 (0.006733) | 0.853534 / 0.226044 (0.627490) | 8.734484 / 2.268929 (6.465556) | 3.968130 / 55.444624 (-51.476495) | 3.118032 / 6.876477 (-3.758444) | 3.078757 / 2.142072 (0.936684) | 1.460694 / 4.805227 (-3.344533) | 0.261858 / 6.500664 (-6.238806) | 0.081089 / 0.075469 (0.005620) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.611473 / 1.841788 (-0.230315) | 17.660545 / 8.074308 (9.586237) | 20.526023 / 10.191392 (10.334631) | 0.223320 / 0.680424 (-0.457103) | 0.027939 / 0.534201 (-0.506261) | 0.542704 / 0.579283 (-0.036579) | 0.563826 / 0.434364 (0.129462) | 0.639936 / 0.540337 (0.099599) | 0.755974 / 1.386936 (-0.630962) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008776 / 0.011353 (-0.002577) | 0.004532 / 0.011008 (-0.006476) | 0.100373 / 0.038508 (0.061865) | 0.029706 / 0.023109 (0.006597) | 0.304374 / 0.275898 (0.028476) | 0.337223 / 0.323480 (0.013743) | 0.007021 / 0.007986 (-0.000965) | 0.003420 / 0.004328 (-0.000908) | 0.077754 / 0.004250 (0.073504) | 0.034411 / 0.037052 (-0.002642) | 0.302926 / 0.258489 (0.044437) | 0.342654 / 0.293841 (0.048813) | 0.034528 / 0.128546 (-0.094018) | 0.011926 / 0.075646 (-0.063721) | 0.322971 / 0.419271 (-0.096301) | 0.041384 / 0.043533 (-0.002149) | 0.306433 / 0.255139 (0.051294) | 0.332293 / 0.283200 (0.049093) | 0.084972 / 0.141683 (-0.056711) | 1.493426 / 1.452155 (0.041271) | 1.570446 / 1.492716 (0.077729) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189090 / 0.018006 (0.171084) | 0.433904 / 0.000490 (0.433414) | 0.001323 / 0.000200 (0.001124) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023531 / 0.037411 (-0.013880) | 0.097774 / 0.014526 (0.083248) | 0.106383 / 0.176557 (-0.070174) | 0.139158 / 0.737135 (-0.597977) | 0.109443 / 0.296338 (-0.186896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419078 / 0.215209 (0.203869) | 4.182657 / 2.077655 (2.105002) | 1.887276 / 1.504120 (0.383156) | 1.679542 / 1.541195 (0.138347) | 1.718035 / 1.468490 
(0.249545) | 0.692628 / 4.584777 (-3.892149) | 3.361354 / 3.745712 (-0.384358) | 1.928583 / 5.269862 (-3.341278) | 1.317291 / 4.565676 (-3.248386) | 0.081799 / 0.424275 (-0.342476) | 0.012318 / 0.007607 (0.004711) | 0.525927 / 0.226044 (0.299883) | 5.285905 / 2.268929 (3.016977) | 2.317524 / 55.444624 (-53.127100) | 1.966478 / 6.876477 (-4.909998) | 2.054869 / 2.142072 (-0.087204) | 0.807579 / 4.805227 (-3.997649) | 0.149854 / 6.500664 (-6.350810) | 0.065285 / 0.075469 (-0.010184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180516 / 1.841788 (-0.661271) | 13.889734 / 8.074308 (5.815426) | 14.076163 / 10.191392 (3.884771) | 0.156276 / 0.680424 (-0.524148) | 0.029187 / 0.534201 (-0.505013) | 0.403859 / 0.579283 (-0.175424) | 0.404998 / 0.434364 (-0.029366) | 0.471467 / 0.540337 (-0.068871) | 0.564526 / 1.386936 (-0.822410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006739 / 0.011353 (-0.004614) | 0.004644 / 0.011008 (-0.006364) | 0.097326 / 0.038508 (0.058818) | 0.027728 / 0.023109 (0.004619) | 0.413537 / 0.275898 (0.137639) | 0.452012 / 0.323480 (0.128532) | 0.005346 / 0.007986 (-0.002639) | 0.003338 / 0.004328 (-0.000991) | 0.075670 / 0.004250 (0.071420) | 0.038825 / 0.037052 (0.001772) | 0.415612 / 0.258489 (0.157123) | 0.454680 / 0.293841 (0.160839) | 0.031866 / 0.128546 (-0.096680) | 0.011616 / 0.075646 (-0.064031) | 0.319527 / 0.419271 (-0.099745) | 0.041283 / 0.043533 (-0.002250) | 0.412046 / 0.255139 (0.156907) | 0.435244 / 0.283200 (0.152044) | 0.088400 / 0.141683 (-0.053283) | 1.478125 / 1.452155 (0.025970) | 1.553677 / 1.492716 (0.060960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229919 / 0.018006 (0.211913) | 0.415446 / 0.000490 (0.414956) | 0.000386 / 0.000200 (0.000186) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.098225 / 0.014526 (0.083699) | 0.106674 / 0.176557 (-0.069883) | 0.144755 / 0.737135 (-0.592380) | 0.109221 / 0.296338 (-0.187117) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457665 / 0.215209 (0.242456) | 4.597849 / 2.077655 (2.520195) | 2.171275 / 1.504120 (0.667155) | 1.945547 / 1.541195 (0.404352) | 2.014043 / 1.468490 (0.545553) | 0.699732 / 4.584777 (-3.885045) | 3.420711 / 3.745712 (-0.325001) | 3.298702 / 5.269862 (-1.971159) | 1.390324 / 4.565676 (-3.175353) | 0.082668 / 0.424275 (-0.341607) | 0.012556 / 0.007607 (0.004949) | 0.550406 / 0.226044 (0.324361) | 5.501060 / 2.268929 (3.232132) | 2.659841 / 55.444624 (-52.784783) | 2.243443 / 6.876477 (-4.633034) | 2.266006 / 2.142072 (0.123934) | 0.806295 / 4.805227 (-3.998933) | 0.151399 / 6.500664 (-6.349265) | 0.067048 / 0.075469 (-0.008421) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291404 / 1.841788 (-0.550384) | 14.164728 / 8.074308 (6.090419) | 13.980219 / 10.191392 (3.788827) | 0.140599 / 0.680424 (-0.539824) | 0.016880 / 0.534201 (-0.517321) | 0.379073 / 0.579283 (-0.200210) | 0.385770 / 0.434364 (-0.048594) | 0.442516 / 0.540337 (-0.097822) | 0.533569 / 1.386936 (-0.853367) |\n\n</details>\n</details>\n\n\n",
"Tests seem to be failing for unrelated reasons.",
"Tests are failing because of a bug on the Hub side - this is being fixed :)\r\n\r\nlmk once the TF documentation page is updated and we can merge !",
"@lhoestq Docs updated!"
] | 2022-12-19T19:40:27Z
| 2023-01-25T16:28:44Z
| 2023-01-25T16:21:40Z
|
MEMBER
| null | null | null |
Hey all! Here's a first draft of the PR to add a multiprocessing implementation for `to_tf_dataset()`. It worked in some quick testing for me, but obviously I need to do much more rigorous testing/benchmarking, and add proper library tests.
The core idea is that we do everything using `multiprocessing` and `numpy`, and just wrap a `tf.data.Dataset` around the output. We could also rewrite the existing single-threaded implementation based on this code, which might simplify it a bit.
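For illustration, here's a minimal sketch of that idea (not the actual code in this PR): worker processes produce numpy batches, and `tf.data.Dataset.from_generator` wraps the resulting generator. The feature names, shapes, and batching logic are made up for the example.
```python
import multiprocessing as mp

import numpy as np
import tensorflow as tf


def _make_batch(batch_indices):
    # Stand-in for fetching dataset rows and running the collate_fn;
    # the real implementation returns a dict of numpy arrays per batch.
    return {"input_ids": np.asarray(batch_indices, dtype=np.int64)}


def numpy_batches(num_rows=64, batch_size=8, num_workers=2):
    # Split the indices into batches and let a pool of workers build them.
    index_batches = np.array_split(np.arange(num_rows), num_rows // batch_size)
    with mp.Pool(num_workers) as pool:
        yield from pool.imap(_make_batch, index_batches)


# Wrap a tf.data.Dataset around the multiprocessing/numpy generator.
# (On spawn-based platforms this should run under `if __name__ == "__main__":`.)
tf_dataset = tf.data.Dataset.from_generator(
    numpy_batches,
    output_signature={"input_ids": tf.TensorSpec(shape=(None,), dtype=tf.int64)},
)

for batch in tf_dataset.take(2):
    print(batch["input_ids"].shape)
```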
Checklist:
- [X] Add initial draft
- [x] Check that it works regardless of whether the `collate_fn` or dataset returns `tf` or `np` arrays
- [x] Check that it works with `tf.string` return data
- [x] Check indices are correctly reshuffled each epoch
- [x] Make sure workers don't try to initialize a GPU device!!
- [x] Check `fit()` with multiple epochs works fine and that the progress bar is correct
- [x] Check there are no memory leaks or zombie processes
- [x] Benchmark performance
- [x] Tweak params for dataset inference - can we speed things up there a bit?
- [x] Add tests to the library
- [x] Add a PR to `transformers` to expose the `num_workers` argument via `prepare_tf_dataset` (will merge after this one is released)
- [x] Stop TF console spam!! (almost)
- [x] Add a method for creating SHM that doesn't crash if it was left and still linked
- [x] Add a barrier for Py <= 3.7 because it doesn't support SharedMemory
- [x] Support string dtypes by converting them into fixed-width character arrays
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5377/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5377/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5377.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5377",
"merged_at": "2023-01-25T16:21:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5377.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5377"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5014
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5014/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5014/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5014/events
|
https://github.com/huggingface/datasets/issues/5014
| 1,383,422,639
|
I_kwDODunzps5SdVqv
| 5,014
|
I need to read the custom dataset in conll format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4",
"events_url": "https://api.github.com/users/shell-nlp/events{/privacy}",
"followers_url": "https://api.github.com/users/shell-nlp/followers",
"following_url": "https://api.github.com/users/shell-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shell-nlp",
"id": 39985245,
"login": "shell-nlp",
"node_id": "MDQ6VXNlcjM5OTg1MjQ1",
"organizations_url": "https://api.github.com/users/shell-nlp/orgs",
"received_events_url": "https://api.github.com/users/shell-nlp/received_events",
"repos_url": "https://api.github.com/users/shell-nlp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shell-nlp",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r\n\r\nIn the meantime, you can use `Dataset.from_generator` to create a dataset as follows:\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# 2009 version\r\nINPUT_COLUMNS = \"ID FORM LEMMA PLEMMA POS PPOS FEAT PFEAT HEAD PHEAD DEPREL PDEPREL\".split()\r\n\r\ndef read_conll(file):\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n idx = 0\r\n with open(file) as f:\r\n for line in f:\r\n if line.startswith(\"-DOCSTART-\") or line == \"\\n\" or not line:\r\n if example[next(iter(example))]:\r\n yield idx, example\r\n idx += 1\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n else:\r\n row_cols = line.split()\r\n for i, col in enumerate(example):\r\n example[col] = row_cols[i].rstrip()\r\n\r\n# (optional) pass custom features with `features=Features(...)`\r\ndset = Dataset.from_generator(read_conll, gen_kwargs={\"file\": \"path/to/conll/file\"}) \r\n``` ",
"I think we could add a dedicated builder if you think this format is general enough.",
"\r\n\r\n\r\n> I think we could add a dedicated builder if you think this format is general enough.\r\n\r\nI think its functions are incomplete. It should have to_ Conll and from_ There are two methods of conll."
] | 2022-09-23T07:49:42Z
| 2022-11-02T11:57:15Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
I need to read the custom dataset in conll format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4",
"events_url": "https://api.github.com/users/shell-nlp/events{/privacy}",
"followers_url": "https://api.github.com/users/shell-nlp/followers",
"following_url": "https://api.github.com/users/shell-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shell-nlp",
"id": 39985245,
"login": "shell-nlp",
"node_id": "MDQ6VXNlcjM5OTg1MjQ1",
"organizations_url": "https://api.github.com/users/shell-nlp/orgs",
"received_events_url": "https://api.github.com/users/shell-nlp/received_events",
"repos_url": "https://api.github.com/users/shell-nlp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shell-nlp",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5014/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5014/timeline
| null |
reopened
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6395
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6395/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6395/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6395/events
|
https://github.com/huggingface/datasets/issues/6395
| 1,986,484,124
|
I_kwDODunzps52Z1ec
| 6,395
|
Add ability to set lock type
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"events_url": "https://api.github.com/users/leoleoasd/events{/privacy}",
"followers_url": "https://api.github.com/users/leoleoasd/followers",
"following_url": "https://api.github.com/users/leoleoasd/following{/other_user}",
"gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leoleoasd",
"id": 37735580,
"login": "leoleoasd",
"node_id": "MDQ6VXNlcjM3NzM1NTgw",
"organizations_url": "https://api.github.com/users/leoleoasd/orgs",
"received_events_url": "https://api.github.com/users/leoleoasd/received_events",
"repos_url": "https://api.github.com/users/leoleoasd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leoleoasd",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"We've replaced our filelock implementation with the `filelock` package, so their repo is the right place to request this feature.\r\n\r\nIn the meantime, the following should work: \r\n```python\r\nimport filelock\r\nfilelock.FileLock = filelock.SoftFileLock\r\n\r\nimport datasets\r\n...\r\n```"
] | 2023-11-09T22:12:30Z
| 2023-11-23T18:50:00Z
| 2023-11-23T18:50:00Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Allow setting the file lock type, maybe from an environment variable.
Currently, it only depends on whether fcntl is available:
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16
### Motivation
In my environment, flock isn't supported on a network-attached drive.
### Your contribution
I'll be happy to submit a PR.
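For reference, a rough sketch of the behavior I have in mind, using the `filelock` package that `datasets` now relies on; the `HF_USE_SOFTFILELOCK` variable name is just a placeholder, not an existing setting:
```python
import os

from filelock import FileLock, SoftFileLock


def get_lock_class():
    # flock-based locks can fail on network-attached storage, so allow opting
    # into the lock-file-based SoftFileLock via an environment variable.
    if os.environ.get("HF_USE_SOFTFILELOCK", "").lower() in ("1", "true", "yes"):
        return SoftFileLock
    return FileLock


lock = get_lock_class()("/path/to/cache/some_file.lock")
with lock:
    ...  # read/write the cached file while holding the lock
```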
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6395/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6395/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5582
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5582/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5582/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5582/events
|
https://github.com/huggingface/datasets/pull/5582
| 1,600,932,092
|
PR_kwDODunzps5K0ZcN
| 5,582
|
Add column_names to IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50772274?v=4",
"events_url": "https://api.github.com/users/patrickloeber/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickloeber/followers",
"following_url": "https://api.github.com/users/patrickloeber/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickloeber/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickloeber",
"id": 50772274,
"login": "patrickloeber",
"node_id": "MDQ6VXNlcjUwNzcyMjc0",
"organizations_url": "https://api.github.com/users/patrickloeber/orgs",
"received_events_url": "https://api.github.com/users/patrickloeber/received_events",
"repos_url": "https://api.github.com/users/patrickloeber/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickloeber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickloeber/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickloeber",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006362 / 0.011353 (-0.004991) | 0.004546 / 0.011008 (-0.006462) | 0.097003 / 0.038508 (0.058495) | 0.028007 / 0.023109 (0.004898) | 0.315097 / 0.275898 (0.039199) | 0.365128 / 0.323480 (0.041649) | 0.004819 / 0.007986 (-0.003167) | 0.003335 / 0.004328 (-0.000994) | 0.076665 / 0.004250 (0.072415) | 0.038285 / 0.037052 (0.001233) | 0.322100 / 0.258489 (0.063611) | 0.407466 / 0.293841 (0.113625) | 0.031580 / 0.128546 (-0.096966) | 0.011645 / 0.075646 (-0.064001) | 0.321789 / 0.419271 (-0.097483) | 0.051015 / 0.043533 (0.007483) | 0.331762 / 0.255139 (0.076623) | 0.369727 / 0.283200 (0.086527) | 0.090144 / 0.141683 (-0.051539) | 1.485480 / 1.452155 (0.033326) | 1.562032 / 1.492716 (0.069316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201192 / 0.018006 (0.183186) | 0.409760 / 0.000490 (0.409270) | 0.002220 / 0.000200 (0.002020) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022361 / 0.037411 (-0.015050) | 0.096375 / 0.014526 (0.081849) | 0.101369 / 0.176557 (-0.075188) | 0.161568 / 0.737135 (-0.575568) | 0.105094 / 0.296338 (-0.191245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426251 / 0.215209 (0.211042) | 4.261374 / 2.077655 (2.183720) | 2.015688 / 1.504120 (0.511569) | 1.833708 / 1.541195 (0.292513) | 1.908994 / 1.468490 
(0.440504) | 0.703108 / 4.584777 (-3.881669) | 3.420767 / 3.745712 (-0.324945) | 1.844776 / 5.269862 (-3.425086) | 1.158470 / 4.565676 (-3.407207) | 0.083324 / 0.424275 (-0.340951) | 0.013054 / 0.007607 (0.005447) | 0.521473 / 0.226044 (0.295429) | 5.245505 / 2.268929 (2.976576) | 2.349110 / 55.444624 (-53.095515) | 2.011119 / 6.876477 (-4.865358) | 2.217807 / 2.142072 (0.075734) | 0.808584 / 4.805227 (-3.996643) | 0.151337 / 6.500664 (-6.349327) | 0.065815 / 0.075469 (-0.009654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221839 / 1.841788 (-0.619949) | 13.634161 / 8.074308 (5.559853) | 13.915360 / 10.191392 (3.723968) | 0.126448 / 0.680424 (-0.553976) | 0.016614 / 0.534201 (-0.517587) | 0.379150 / 0.579283 (-0.200133) | 0.382134 / 0.434364 (-0.052230) | 0.442845 / 0.540337 (-0.097493) | 0.519578 / 1.386936 (-0.867358) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006238 / 0.011353 (-0.005115) | 0.004591 / 0.011008 (-0.006418) | 0.076652 / 0.038508 (0.038144) | 0.026882 / 0.023109 (0.003773) | 0.341948 / 0.275898 (0.066050) | 0.375244 / 0.323480 (0.051764) | 0.004770 / 0.007986 (-0.003215) | 0.004703 / 0.004328 (0.000374) | 0.075797 / 0.004250 (0.071547) | 0.035001 / 0.037052 (-0.002051) | 0.341670 / 0.258489 (0.083181) | 0.383028 / 0.293841 (0.089187) | 0.031756 / 0.128546 (-0.096791) | 0.011714 / 0.075646 (-0.063933) | 0.085552 / 0.419271 (-0.333720) | 0.047697 / 0.043533 (0.004164) | 0.340805 / 0.255139 (0.085666) | 0.365478 / 0.283200 (0.082278) | 0.093146 / 0.141683 (-0.048537) | 1.465100 / 1.452155 (0.012945) | 1.552708 / 1.492716 (0.059992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209117 / 0.018006 (0.191111) | 0.402622 / 0.000490 (0.402132) | 0.003940 / 0.000200 (0.003740) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026027 / 0.037411 (-0.011385) | 0.098346 / 0.014526 (0.083820) | 0.107349 / 0.176557 (-0.069207) | 0.157846 / 0.737135 (-0.579289) | 0.109566 / 0.296338 (-0.186772) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445088 / 0.215209 (0.229879) | 4.450727 / 2.077655 (2.373072) | 2.237798 / 1.504120 (0.733678) | 2.026060 / 1.541195 (0.484866) | 2.020464 / 1.468490 (0.551974) | 0.700155 / 4.584777 (-3.884622) | 3.435497 / 3.745712 (-0.310215) | 2.851970 / 5.269862 (-2.417891) | 1.512689 / 4.565676 (-3.052988) | 0.083717 / 0.424275 (-0.340558) | 0.012466 / 0.007607 (0.004859) | 0.545130 / 0.226044 (0.319085) | 5.478228 / 2.268929 (3.209300) | 2.554169 / 55.444624 (-52.890456) | 2.214703 / 6.876477 (-4.661774) | 2.229997 / 2.142072 (0.087925) | 0.809851 / 4.805227 (-3.995376) | 0.151019 / 6.500664 (-6.349645) | 0.066354 / 0.075469 (-0.009115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281016 / 1.841788 (-0.560772) | 14.071312 / 8.074308 (5.997004) | 14.682465 / 10.191392 (4.491073) | 0.144197 / 0.680424 (-0.536227) | 0.017088 / 0.534201 (-0.517113) | 0.379049 / 0.579283 (-0.200234) | 0.390713 / 0.434364 (-0.043650) | 0.435804 / 0.540337 (-0.104534) | 0.518895 / 1.386936 (-0.868041) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-27T10:50:07Z
| 2023-03-13T19:10:22Z
| 2023-03-13T19:03:32Z
|
CONTRIBUTOR
| null | null | null |
This PR closes #5383
* Add column_names property to IterableDataset
* Add multiple tests for this new property
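A quick usage sketch of the new property (the column names below assume the "squad" dataset and are only for illustration):
```python
from datasets import load_dataset

# Streaming returns an IterableDataset; with this PR its column names are
# available without consuming any examples, as long as the features are known.
ds = load_dataset("squad", split="train", streaming=True)
print(ds.column_names)  # ['id', 'title', 'context', 'question', 'answers']
```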
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5582/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5582/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5582.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5582",
"merged_at": "2023-03-13T19:03:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5582.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5582"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4669
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4669/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4669/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4669/events
|
https://github.com/huggingface/datasets/issues/4669
| 1,299,848,003
|
I_kwDODunzps5NehtD
| 4,669
|
loading oscar-corpus/OSCAR-2201 raises an error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vitalyshalumov",
"id": 33824221,
"login": "vitalyshalumov",
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vitalyshalumov",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"I had to use the appropriate token for use_auth_token. Thank you."
] | 2022-07-10T07:09:30Z
| 2022-07-11T09:27:49Z
| 2022-07-11T09:27:49Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
load_dataset('oscar-2201', 'af')
raises an error:
Traceback (most recent call last):
File "/usr/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset
builder_instance = load_dataset_builder(
File ".../lib/python3.8/site-packages/datasets/load.py", line 1439, in load_dataset_builder
dataset_module = dataset_module_factory(
File ".../lib/python3.8/site-packages/datasets/load.py", line 1189, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at .../oscar-2201/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/oscar-2201/oscar-2201.py
I've tried other permutations such as:
oscar_22 = load_dataset('oscar-2201', 'af',use_auth_token=True)
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201', 'af',use_auth_token=True)
oscar_22 = load_dataset('oscar-2201', 'af')
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201')
with the same unfortunate result.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
oscar_22 = load_dataset('oscar-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-2201', 'af')
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201')
```
## Expected results
loaded data
## Actual results
Traceback (most recent call last):
File "/usr/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset
builder_instance = load_dataset_builder(
File ".../lib/python3.8/site-packages/datasets/load.py", line 1439, in load_dataset_builder
dataset_module = dataset_module_factory(
File ".../lib/python3.8/site-packages/datasets/load.py", line 1189, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at .../oscar-2201/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/oscar-2201/oscar-2201.py
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
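For anyone hitting the same error: per the resolution in the comments, OSCAR-2201 is a gated dataset that only exists under the `oscar-corpus` organization, so a working setup looks roughly like this (the token is a placeholder for one that has been granted access):
```python
from huggingface_hub import login
from datasets import load_dataset

# The short name 'oscar-2201' does not exist on the Hub; authenticate first,
# then use the full repo id with use_auth_token=True.
login(token="hf_...")  # or run `huggingface-cli login` once

oscar_af = load_dataset("oscar-corpus/OSCAR-2201", "af", use_auth_token=True)
```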
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vitalyshalumov",
"id": 33824221,
"login": "vitalyshalumov",
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vitalyshalumov",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4669/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4669/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6973
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6973/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6973/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6973/events
|
https://github.com/huggingface/datasets/issues/6973
| 2,355,517,362
|
I_kwDODunzps6MZley
| 6,973
|
IndexError during training with Squad dataset and T5-small model
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/151521233?v=4",
"events_url": "https://api.github.com/users/ramtunguturi36/events{/privacy}",
"followers_url": "https://api.github.com/users/ramtunguturi36/followers",
"following_url": "https://api.github.com/users/ramtunguturi36/following{/other_user}",
"gists_url": "https://api.github.com/users/ramtunguturi36/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ramtunguturi36",
"id": 151521233,
"login": "ramtunguturi36",
"node_id": "U_kgDOCQgH0Q",
"organizations_url": "https://api.github.com/users/ramtunguturi36/orgs",
"received_events_url": "https://api.github.com/users/ramtunguturi36/received_events",
"repos_url": "https://api.github.com/users/ramtunguturi36/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ramtunguturi36/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ramtunguturi36/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ramtunguturi36",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"add remove_unused_columns=False to training_args\r\nhttps://github.com/huggingface/datasets/issues/6535#issuecomment-1874024704",
"Closing this issue because it was a reported and fixed in transformers."
] | 2024-06-16T07:53:54Z
| 2024-07-01T11:25:40Z
| 2024-07-01T11:25:40Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1. Install the required libraries: `!pip install transformers datasets`
2. Run the following code:
!pip install transformers datasets
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TrainingArguments, Trainer, DataCollatorWithPadding
# Load a small, publicly available dataset
from datasets import load_dataset
dataset = load_dataset("squad", split="train[:100]") # Use a small subset for testing
# Load a pre-trained model and tokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# Define a basic data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
per_device_train_batch_size=2,
num_train_epochs=1,
)
# Create a trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
data_collator=data_collator,
)
# Train the model
trainer.train()
### Expected behavior
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-23-f13a4b23c001>](https://localhost:8080/#) in <cell line: 34>()
32
33 # Train the model
---> 34 trainer.train()
10 frames
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
427 if isinstance(key, int):
428 if (key < 0 and key + size < 0) or (key >= size):
--> 429 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
430 return
431 elif isinstance(key, slice):
IndexError: Invalid key: 42 is out of bounds for size 0
### Environment info
transformers version: 4.41.2
datasets version: 1.18.4
Python version: 3.10.12
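Following the first comment on this issue, the reported workaround is to keep the raw columns so the `Trainer` is not left with an empty dataset after stripping columns that don't match the model's `forward()` signature, e.g.:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    remove_unused_columns=False,  # keep the raw squad columns (avoids the size-0 dataset)
)
```
Note that actually training T5 on SQuAD still requires tokenizing the examples into `input_ids`/`labels` first; the snippet above only addresses the `IndexError`.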
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6973/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6973/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7259
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7259/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7259/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7259/events
|
https://github.com/huggingface/datasets/pull/7259
| 2,618,909,241
|
PR_kwDODunzps6AIEY-
| 7,259
|
Don't embed videos
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7259). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-10-28T16:25:10Z
| 2024-10-28T16:27:34Z
| 2024-10-28T16:26:01Z
|
MEMBER
| null | null | null |
Don't include video bytes when running `download_and_prepare(format="parquet")`.
This also affects `push_to_hub`, though: it will just upload the local paths of the videos.
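For illustration, the kind of call this changes looks roughly like this (the repo id is hypothetical; in the current `download_and_prepare` signature the parquet export goes through `file_format`):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("username/my_video_dataset")  # hypothetical repo id
# Exporting to parquet now keeps the video paths instead of embedding the raw bytes.
builder.download_and_prepare("./output_dir", file_format="parquet")
```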
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7259/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7259/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7259.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7259",
"merged_at": "2024-10-28T16:26:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7259.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7259"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5842
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5842/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5842/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5842/events
|
https://github.com/huggingface/datasets/issues/5842
| 1,705,510,602
|
I_kwDODunzps5lqAbK
| 5,842
|
Remove columns in iterable dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/surya-narayanan",
"id": 17240858,
"login": "surya-narayanan",
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/surya-narayanan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Transferring this issue as it's related to the 🤗 Datasets library ",
"Hi @surya-narayanan! Could you provide some code snippet?",
"This method has been recently added to the `IterableDataset`, so you need to update the `datasets`' installation (`pip install -U datasets`) to use it."
] | 2023-05-11T03:48:46Z
| 2023-06-21T16:36:42Z
| 2023-06-21T16:36:41Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Right now, `remove_columns()` raises a `NotImplementedError` for iterable-style datasets.
### Motivation
It would be great to have the same functionality irrespective of whether one is using an iterable-style or a map-style dataset.
### Your contribution
hope and courage.
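For reference, now that the method exists on `IterableDataset` (see the last comment above), a minimal example might look like this (column names assume the "squad" dataset):
```python
from datasets import load_dataset

# Requires a recent `datasets` release (pip install -U datasets) where
# IterableDataset.remove_columns is implemented.
ds = load_dataset("squad", split="train", streaming=True)
ds = ds.remove_columns(["id", "title"])  # columns are dropped lazily, per example
print(next(iter(ds)).keys())  # dict_keys(['context', 'question', 'answers'])
```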
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5842/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5842/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6004
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6004/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6004/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6004/events
|
https://github.com/huggingface/datasets/pull/6004
| 1,786,636,368
|
PR_kwDODunzps5UjN2h
| 6,004
|
Misc improvements
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006897 / 0.011353 (-0.004456) | 0.004207 / 0.011008 (-0.006802) | 0.104828 / 0.038508 (0.066320) | 0.048054 / 0.023109 (0.024945) | 0.373991 / 0.275898 (0.098093) | 0.426740 / 0.323480 (0.103260) | 0.005540 / 0.007986 (-0.002446) | 0.003531 / 0.004328 (-0.000797) | 0.079304 / 0.004250 (0.075053) | 0.066996 / 0.037052 (0.029944) | 0.370675 / 0.258489 (0.112186) | 0.414154 / 0.293841 (0.120313) | 0.031567 / 0.128546 (-0.096979) | 0.008843 / 0.075646 (-0.066803) | 0.357426 / 0.419271 (-0.061845) | 0.067040 / 0.043533 (0.023508) | 0.362384 / 0.255139 (0.107245) | 0.376056 / 0.283200 (0.092856) | 0.032985 / 0.141683 (-0.108697) | 1.560603 / 1.452155 (0.108448) | 1.619024 / 1.492716 (0.126308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229059 / 0.018006 (0.211053) | 0.440513 / 0.000490 (0.440023) | 0.004647 / 0.000200 (0.004447) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029517 / 0.037411 (-0.007894) | 0.120974 / 0.014526 (0.106448) | 0.125070 / 0.176557 (-0.051486) | 0.184695 / 0.737135 (-0.552441) | 0.130244 / 0.296338 (-0.166095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436930 / 0.215209 (0.221721) | 4.356118 / 2.077655 (2.278463) | 2.049169 / 1.504120 (0.545049) | 1.842898 / 1.541195 (0.301703) | 1.918948 / 1.468490 
(0.450458) | 0.553573 / 4.584777 (-4.031204) | 3.883195 / 3.745712 (0.137483) | 3.209780 / 5.269862 (-2.060081) | 1.551707 / 4.565676 (-3.013970) | 0.068181 / 0.424275 (-0.356094) | 0.012370 / 0.007607 (0.004762) | 0.539899 / 0.226044 (0.313854) | 5.380008 / 2.268929 (3.111079) | 2.518178 / 55.444624 (-52.926446) | 2.174190 / 6.876477 (-4.702286) | 2.317812 / 2.142072 (0.175740) | 0.674154 / 4.805227 (-4.131073) | 0.149313 / 6.500664 (-6.351351) | 0.068297 / 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261426 / 1.841788 (-0.580362) | 15.316378 / 8.074308 (7.242070) | 13.573512 / 10.191392 (3.382120) | 0.190022 / 0.680424 (-0.490401) | 0.018697 / 0.534201 (-0.515504) | 0.448122 / 0.579283 (-0.131161) | 0.435044 / 0.434364 (0.000681) | 0.550065 / 0.540337 (0.009728) | 0.653547 / 1.386936 (-0.733389) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007116 / 0.011353 (-0.004237) | 0.004375 / 0.011008 (-0.006633) | 0.081793 / 0.038508 (0.043285) | 0.047980 / 0.023109 (0.024871) | 0.392185 / 0.275898 (0.116287) | 0.462263 / 0.323480 (0.138783) | 0.005574 / 0.007986 (-0.002412) | 0.003552 / 0.004328 (-0.000776) | 0.080413 / 0.004250 (0.076162) | 0.065539 / 0.037052 (0.028487) | 0.413137 / 0.258489 (0.154648) | 0.467377 / 0.293841 (0.173536) | 0.034386 / 0.128546 (-0.094160) | 0.009183 / 0.075646 (-0.066464) | 0.087542 / 0.419271 (-0.331730) | 0.053954 / 0.043533 (0.010421) | 0.385096 / 0.255139 (0.129957) | 0.404900 / 0.283200 (0.121701) | 0.025908 / 0.141683 (-0.115775) | 1.550159 / 1.452155 (0.098005) | 1.598794 / 1.492716 (0.106078) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246222 / 0.018006 (0.228216) | 0.441095 / 0.000490 (0.440605) | 0.006863 / 0.000200 (0.006663) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032179 / 0.037411 (-0.005233) | 0.120112 / 0.014526 (0.105586) | 0.129326 / 0.176557 (-0.047230) | 0.184542 / 0.737135 (-0.552593) | 0.135038 / 0.296338 (-0.161300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459002 / 0.215209 (0.243793) | 4.580258 / 2.077655 (2.502604) | 2.296689 / 1.504120 (0.792569) | 2.104338 / 1.541195 (0.563143) | 2.182896 / 1.468490 (0.714406) | 0.546447 / 4.584777 (-4.038330) | 3.854047 / 3.745712 (0.108335) | 1.873829 / 5.269862 (-3.396032) | 1.116484 / 4.565676 (-3.449193) | 0.067158 / 0.424275 (-0.357117) | 0.012035 / 0.007607 (0.004428) | 0.556642 / 0.226044 (0.330597) | 5.574436 / 2.268929 (3.305508) | 2.828223 / 55.444624 (-52.616402) | 2.519851 / 6.876477 (-4.356626) | 2.668594 / 2.142072 (0.526521) | 0.675989 / 4.805227 (-4.129238) | 0.146075 / 6.500664 (-6.354589) | 0.067788 / 0.075469 (-0.007681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345958 / 1.841788 (-0.495830) | 15.672748 / 8.074308 (7.598440) | 14.937583 / 10.191392 (4.746191) | 0.163479 / 0.680424 (-0.516945) | 0.018364 / 0.534201 (-0.515837) | 0.433296 / 0.579283 (-0.145987) | 0.432463 / 0.434364 (-0.001901) | 0.512000 / 0.540337 (-0.028338) | 0.619397 / 1.386936 (-0.767539) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010097 / 0.011353 (-0.001256) | 0.005070 / 0.011008 (-0.005939) | 0.118638 / 0.038508 (0.080130) | 0.043651 / 0.023109 (0.020542) | 0.356074 / 0.275898 (0.080176) | 0.414578 / 0.323480 (0.091098) | 0.005939 / 0.007986 (-0.002046) | 0.004927 / 0.004328 (0.000598) | 0.089545 / 0.004250 (0.085294) | 0.067533 / 0.037052 (0.030481) | 0.371550 / 0.258489 (0.113061) | 0.417808 / 0.293841 (0.123967) | 0.045186 / 0.128546 (-0.083361) | 0.015763 / 0.075646 (-0.059883) | 0.393304 / 0.419271 (-0.025967) | 0.065123 / 0.043533 (0.021591) | 0.345057 / 0.255139 (0.089918) | 0.378809 / 0.283200 (0.095610) | 0.033243 / 0.141683 (-0.108440) | 1.679956 / 1.452155 (0.227802) | 1.775456 / 1.492716 (0.282739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229723 / 0.018006 (0.211717) | 0.554630 / 0.000490 (0.554140) | 0.008729 / 0.000200 (0.008529) | 0.000183 / 0.000054 (0.000129) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027284 / 0.037411 (-0.010128) | 0.114741 / 0.014526 (0.100215) | 0.129188 / 0.176557 (-0.047369) | 0.189270 / 0.737135 (-0.547866) | 0.126000 / 0.296338 (-0.170339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580417 / 0.215209 (0.365208) | 5.829337 / 2.077655 (3.751683) | 2.421191 / 1.504120 (0.917071) | 2.063673 / 1.541195 (0.522479) | 2.133427 / 1.468490 
(0.664937) | 0.830964 / 4.584777 (-3.753813) | 5.107139 / 3.745712 (1.361427) | 4.599451 / 5.269862 (-0.670410) | 2.406502 / 4.565676 (-2.159175) | 0.100422 / 0.424275 (-0.323853) | 0.011850 / 0.007607 (0.004243) | 0.741881 / 0.226044 (0.515836) | 7.425689 / 2.268929 (5.156760) | 3.068948 / 55.444624 (-52.375676) | 2.496292 / 6.876477 (-4.380184) | 2.566420 / 2.142072 (0.424348) | 1.093084 / 4.805227 (-3.712144) | 0.224106 / 6.500664 (-6.276558) | 0.084549 / 0.075469 (0.009080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.416315 / 1.841788 (-0.425473) | 16.306901 / 8.074308 (8.232593) | 19.792419 / 10.191392 (9.601027) | 0.224223 / 0.680424 (-0.456201) | 0.026385 / 0.534201 (-0.507816) | 0.463460 / 0.579283 (-0.115823) | 0.598385 / 0.434364 (0.164021) | 0.543981 / 0.540337 (0.003644) | 0.647454 / 1.386936 (-0.739482) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009470 / 0.011353 (-0.001883) | 0.004800 / 0.011008 (-0.006208) | 0.094276 / 0.038508 (0.055768) | 0.045157 / 0.023109 (0.022048) | 0.397302 / 0.275898 (0.121404) | 0.474213 / 0.323480 (0.150733) | 0.005826 / 0.007986 (-0.002160) | 0.003724 / 0.004328 (-0.000605) | 0.090060 / 0.004250 (0.085809) | 0.066671 / 0.037052 (0.029618) | 0.439560 / 0.258489 (0.181071) | 0.468598 / 0.293841 (0.174757) | 0.044549 / 0.128546 (-0.083997) | 0.014000 / 0.075646 (-0.061646) | 0.110457 / 0.419271 (-0.308815) | 0.065898 / 0.043533 (0.022365) | 0.408101 / 0.255139 (0.152962) | 0.433473 / 0.283200 (0.150273) | 0.038438 / 0.141683 (-0.103245) | 1.767781 / 1.452155 (0.315626) | 1.791575 / 1.492716 (0.298859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230257 / 0.018006 (0.212251) | 0.492280 / 0.000490 (0.491790) | 0.005110 / 0.000200 (0.004910) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028854 / 0.037411 (-0.008557) | 0.111702 / 0.014526 (0.097176) | 0.122040 / 0.176557 (-0.054517) | 0.179103 / 0.737135 (-0.558032) | 0.128869 / 0.296338 (-0.167470) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634795 / 0.215209 (0.419586) | 6.204760 / 2.077655 (4.127105) | 2.692479 / 1.504120 (1.188359) | 2.324260 / 1.541195 (0.783066) | 2.380640 / 1.468490 (0.912149) | 0.887827 / 4.584777 (-3.696950) | 5.251648 / 3.745712 (1.505935) | 2.632767 / 5.269862 (-2.637095) | 1.745721 / 4.565676 (-2.819955) | 0.108364 / 0.424275 (-0.315911) | 0.013409 / 0.007607 (0.005802) | 0.783427 / 0.226044 (0.557383) | 7.765144 / 2.268929 (5.496216) | 3.340686 / 55.444624 (-52.103938) | 2.715340 / 6.876477 (-4.161137) | 2.768604 / 2.142072 (0.626531) | 1.119746 / 4.805227 (-3.685481) | 0.210804 / 6.500664 (-6.289860) | 0.072600 / 0.075469 (-0.002869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517334 / 1.841788 (-0.324454) | 17.046837 / 8.074308 (8.972529) | 19.371090 / 10.191392 (9.179698) | 0.194275 / 0.680424 (-0.486148) | 0.026712 / 0.534201 (-0.507488) | 0.462731 / 0.579283 (-0.116552) | 0.568958 / 0.434364 (0.134595) | 0.555707 / 0.540337 (0.015370) | 0.663654 / 1.386936 (-0.723283) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006423 / 0.011353 (-0.004930) | 0.003882 / 0.011008 (-0.007126) | 0.082976 / 0.038508 (0.044468) | 0.071281 / 0.023109 (0.048171) | 0.311367 / 0.275898 (0.035469) | 0.348228 / 0.323480 (0.024748) | 0.005315 / 0.007986 (-0.002671) | 0.003326 / 0.004328 (-0.001003) | 0.064641 / 0.004250 (0.060391) | 0.056134 / 0.037052 (0.019081) | 0.314071 / 0.258489 (0.055582) | 0.360534 / 0.293841 (0.066693) | 0.030642 / 0.128546 (-0.097904) | 0.008301 / 0.075646 (-0.067345) | 0.285820 / 0.419271 (-0.133451) | 0.069241 / 0.043533 (0.025708) | 0.313995 / 0.255139 (0.058856) | 0.336656 / 0.283200 (0.053457) | 0.031686 / 0.141683 (-0.109997) | 1.467627 / 1.452155 (0.015472) | 1.536493 / 1.492716 (0.043777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196518 / 0.018006 (0.178512) | 0.458235 / 0.000490 (0.457745) | 0.005599 / 0.000200 (0.005399) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027371 / 0.037411 (-0.010040) | 0.080986 / 0.014526 (0.066460) | 0.093296 / 0.176557 (-0.083260) | 0.150592 / 0.737135 (-0.586543) | 0.094150 / 0.296338 (-0.202188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379412 / 0.215209 (0.164202) | 3.797927 / 2.077655 (1.720272) | 1.830654 / 1.504120 (0.326534) | 1.669569 / 1.541195 (0.128374) | 1.746738 / 1.468490 
(0.278248) | 0.479536 / 4.584777 (-4.105241) | 3.592867 / 3.745712 (-0.152845) | 5.468098 / 5.269862 (0.198237) | 3.268013 / 4.565676 (-1.297663) | 0.056635 / 0.424275 (-0.367640) | 0.007224 / 0.007607 (-0.000383) | 0.456681 / 0.226044 (0.230636) | 4.566736 / 2.268929 (2.297807) | 2.362831 / 55.444624 (-53.081793) | 1.965141 / 6.876477 (-4.911336) | 2.156905 / 2.142072 (0.014833) | 0.572543 / 4.805227 (-4.232684) | 0.132203 / 6.500664 (-6.368461) | 0.059254 / 0.075469 (-0.016215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256134 / 1.841788 (-0.585654) | 19.905438 / 8.074308 (11.831130) | 14.179556 / 10.191392 (3.988164) | 0.168043 / 0.680424 (-0.512381) | 0.018215 / 0.534201 (-0.515986) | 0.392740 / 0.579283 (-0.186543) | 0.398397 / 0.434364 (-0.035967) | 0.463806 / 0.540337 (-0.076531) | 0.616248 / 1.386936 (-0.770688) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006564 / 0.011353 (-0.004789) | 0.003923 / 0.011008 (-0.007085) | 0.063929 / 0.038508 (0.025421) | 0.073780 / 0.023109 (0.050671) | 0.360242 / 0.275898 (0.084344) | 0.395078 / 0.323480 (0.071598) | 0.005265 / 0.007986 (-0.002720) | 0.003229 / 0.004328 (-0.001100) | 0.064094 / 0.004250 (0.059843) | 0.057468 / 0.037052 (0.020416) | 0.369530 / 0.258489 (0.111041) | 0.411159 / 0.293841 (0.117318) | 0.031278 / 0.128546 (-0.097268) | 0.008424 / 0.075646 (-0.067222) | 0.070411 / 0.419271 (-0.348860) | 0.048714 / 0.043533 (0.005181) | 0.361280 / 0.255139 (0.106141) | 0.382468 / 0.283200 (0.099269) | 0.023059 / 0.141683 (-0.118624) | 1.452369 / 1.452155 (0.000215) | 1.519192 / 1.492716 (0.026475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223745 / 0.018006 (0.205739) | 0.442086 / 0.000490 (0.441596) | 0.000379 / 0.000200 (0.000179) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030919 / 0.037411 (-0.006493) | 0.088483 / 0.014526 (0.073958) | 0.101165 / 0.176557 (-0.075391) | 0.154332 / 0.737135 (-0.582804) | 0.103030 / 0.296338 (-0.193309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414520 / 0.215209 (0.199311) | 4.126754 / 2.077655 (2.049099) | 2.142677 / 1.504120 (0.638557) | 1.995300 / 1.541195 (0.454106) | 2.101678 / 1.468490 (0.633188) | 0.481099 / 4.584777 (-4.103678) | 3.562813 / 3.745712 (-0.182900) | 3.392463 / 5.269862 (-1.877399) | 1.983943 / 4.565676 (-2.581734) | 0.056594 / 0.424275 (-0.367681) | 0.007216 / 0.007607 (-0.000391) | 0.495085 / 0.226044 (0.269041) | 4.955640 / 2.268929 (2.686712) | 2.629434 / 55.444624 (-52.815191) | 2.269577 / 6.876477 (-4.606900) | 2.357708 / 2.142072 (0.215635) | 0.612370 / 4.805227 (-4.192857) | 0.131169 / 6.500664 (-6.369495) | 0.061029 / 0.075469 (-0.014440) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.339438 / 1.841788 (-0.502350) | 19.757611 / 8.074308 (11.683303) | 14.246254 / 10.191392 (4.054862) | 0.170750 / 0.680424 (-0.509674) | 0.018192 / 0.534201 (-0.516009) | 0.395693 / 0.579283 (-0.183590) | 0.411003 / 0.434364 (-0.023361) | 0.478531 / 0.540337 (-0.061806) | 0.650291 / 1.386936 (-0.736645) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-03T18:29:14Z
| 2023-07-06T17:04:11Z
| 2023-07-06T16:55:25Z
|
COLLABORATOR
| null | null | null |
Contains the following improvements:
* fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section
* updates `Makefile` to also run the style checks on `utils` and `setup.py`
* deletes a test for GH-hosted datasets (no longer supported)
* deletes `convert_dataset.sh` (outdated)
* aligns `utils/release.py` with `transformers` (the current version is outdated)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6004/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6004/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6004",
"merged_at": "2023-07-06T16:55:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6004"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6335
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6335/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6335/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6335/events
|
https://github.com/huggingface/datasets/pull/6335
| 1,956,740,818
|
PR_kwDODunzps5dggIV
| 6,335
|
Support fsspec 2023.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006013 / 0.011353 (-0.005340) | 0.003647 / 0.011008 (-0.007362) | 0.081781 / 0.038508 (0.043273) | 0.059020 / 0.023109 (0.035911) | 0.321823 / 0.275898 (0.045925) | 0.350159 / 0.323480 (0.026679) | 0.003599 / 0.007986 (-0.004386) | 0.002877 / 0.004328 (-0.001452) | 0.063941 / 0.004250 (0.059690) | 0.049460 / 0.037052 (0.012408) | 0.330185 / 0.258489 (0.071696) | 0.362220 / 0.293841 (0.068379) | 0.027613 / 0.128546 (-0.100934) | 0.007976 / 0.075646 (-0.067670) | 0.263386 / 0.419271 (-0.155885) | 0.045504 / 0.043533 (0.001971) | 0.321172 / 0.255139 (0.066033) | 0.345291 / 0.283200 (0.062091) | 0.023133 / 0.141683 (-0.118550) | 1.435816 / 1.452155 (-0.016339) | 1.557241 / 1.492716 (0.064524) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222228 / 0.018006 (0.204222) | 0.420008 / 0.000490 (0.419518) | 0.008598 / 0.000200 (0.008398) | 0.000343 / 0.000054 (0.000288) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023725 / 0.037411 (-0.013686) | 0.073023 / 0.014526 (0.058497) | 0.814888 / 0.176557 (0.638332) | 0.294122 / 0.737135 (-0.443013) | 0.088945 / 0.296338 (-0.207393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393561 / 0.215209 (0.178352) | 3.946544 / 2.077655 (1.868890) | 1.916476 / 1.504120 (0.412356) | 1.721544 / 1.541195 (0.180349) | 1.768583 / 1.468490 
(0.300093) | 0.508067 / 4.584777 (-4.076710) | 3.047832 / 3.745712 (-0.697880) | 2.952842 / 5.269862 (-2.317020) | 1.869337 / 4.565676 (-2.696339) | 0.057812 / 0.424275 (-0.366463) | 0.006694 / 0.007607 (-0.000913) | 0.463007 / 0.226044 (0.236963) | 4.635087 / 2.268929 (2.366158) | 2.419833 / 55.444624 (-53.024792) | 2.018519 / 6.876477 (-4.857958) | 2.043430 / 2.142072 (-0.098643) | 0.590895 / 4.805227 (-4.214333) | 0.126113 / 6.500664 (-6.374552) | 0.061045 / 0.075469 (-0.014424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226850 / 1.841788 (-0.614937) | 17.336630 / 8.074308 (9.262322) | 13.651049 / 10.191392 (3.459656) | 0.143308 / 0.680424 (-0.537116) | 0.016938 / 0.534201 (-0.517263) | 0.332829 / 0.579283 (-0.246454) | 0.368684 / 0.434364 (-0.065680) | 0.385848 / 0.540337 (-0.154489) | 0.546391 / 1.386936 (-0.840545) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006149 / 0.011353 (-0.005204) | 0.003818 / 0.011008 (-0.007191) | 0.064012 / 0.038508 (0.025504) | 0.059846 / 0.023109 (0.036737) | 0.455928 / 0.275898 (0.180030) | 0.480736 / 0.323480 (0.157256) | 0.004874 / 0.007986 (-0.003111) | 0.002877 / 0.004328 (-0.001451) | 0.064195 / 0.004250 (0.059944) | 0.048146 / 0.037052 (0.011094) | 0.452638 / 0.258489 (0.194149) | 0.484339 / 0.293841 (0.190499) | 0.028832 / 0.128546 (-0.099715) | 0.008162 / 0.075646 (-0.067485) | 0.069855 / 0.419271 (-0.349417) | 0.041429 / 0.043533 (-0.002104) | 0.453282 / 0.255139 (0.198143) | 0.473812 / 0.283200 (0.190613) | 0.021186 / 0.141683 (-0.120497) | 1.465207 / 1.452155 (0.013052) | 1.508216 / 1.492716 (0.015500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242491 / 0.018006 (0.224485) | 0.421219 / 0.000490 (0.420730) | 0.011201 / 0.000200 (0.011001) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027015 / 0.037411 (-0.010396) | 0.080465 / 0.014526 (0.065939) | 0.092622 / 0.176557 (-0.083934) | 0.146111 / 0.737135 (-0.591024) | 0.091546 / 0.296338 (-0.204793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458351 / 0.215209 (0.243142) | 4.591454 / 2.077655 (2.513799) | 2.508156 / 1.504120 (1.004037) | 2.328771 / 1.541195 (0.787576) | 2.423251 / 1.468490 (0.954761) | 0.508504 / 4.584777 (-4.076273) | 3.133789 / 3.745712 (-0.611923) | 2.862777 / 5.269862 (-2.407084) | 1.886327 / 4.565676 (-2.679350) | 0.058017 / 0.424275 (-0.366258) | 0.006496 / 0.007607 (-0.001111) | 0.529629 / 0.226044 (0.303585) | 5.310338 / 2.268929 (3.041409) | 2.973075 / 55.444624 (-52.471549) | 2.601313 / 6.876477 (-4.275163) | 2.777348 / 2.142072 (0.635275) | 0.593711 / 4.805227 (-4.211516) | 0.125453 / 6.500664 (-6.375211) | 0.061034 / 0.075469 (-0.014435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374391 / 1.841788 (-0.467397) | 18.768026 / 8.074308 (10.693718) | 15.053637 / 10.191392 (4.862245) | 0.158253 / 0.680424 (-0.522171) | 0.018126 / 0.534201 (-0.516075) | 0.337427 / 0.579283 (-0.241856) | 0.391678 / 0.434364 (-0.042686) | 0.398524 / 0.540337 (-0.141813) | 0.558629 / 1.386936 (-0.828307) |\n\n</details>\n</details>\n\n\n",
"I think https://github.com/huggingface/datasets/pull/6334 fixes it already no ?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006432 / 0.011353 (-0.004921) | 0.003861 / 0.011008 (-0.007147) | 0.084132 / 0.038508 (0.045624) | 0.069391 / 0.023109 (0.046282) | 0.341081 / 0.275898 (0.065183) | 0.375975 / 0.323480 (0.052495) | 0.003962 / 0.007986 (-0.004024) | 0.003235 / 0.004328 (-0.001094) | 0.064927 / 0.004250 (0.060677) | 0.054190 / 0.037052 (0.017137) | 0.350719 / 0.258489 (0.092230) | 0.393216 / 0.293841 (0.099375) | 0.031002 / 0.128546 (-0.097544) | 0.008416 / 0.075646 (-0.067230) | 0.289268 / 0.419271 (-0.130003) | 0.052167 / 0.043533 (0.008634) | 0.347559 / 0.255139 (0.092420) | 0.370908 / 0.283200 (0.087709) | 0.022540 / 0.141683 (-0.119142) | 1.486297 / 1.452155 (0.034143) | 1.576968 / 1.492716 (0.084252) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237048 / 0.018006 (0.219042) | 0.452065 / 0.000490 (0.451575) | 0.013963 / 0.000200 (0.013763) | 0.000242 / 0.000054 (0.000188) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028084 / 0.037411 (-0.009327) | 0.081271 / 0.014526 (0.066745) | 0.096490 / 0.176557 (-0.080067) | 0.152106 / 0.737135 (-0.585030) | 0.096174 / 0.296338 (-0.200164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386585 / 0.215209 (0.171375) | 3.854996 / 2.077655 (1.777342) | 1.832898 / 1.504120 (0.328778) | 1.662832 / 1.541195 (0.121638) | 1.730753 / 1.468490 
(0.262263) | 0.485286 / 4.584777 (-4.099491) | 3.571410 / 3.745712 (-0.174302) | 3.373035 / 5.269862 (-1.896826) | 1.995570 / 4.565676 (-2.570107) | 0.056711 / 0.424275 (-0.367564) | 0.007447 / 0.007607 (-0.000160) | 0.462985 / 0.226044 (0.236941) | 4.617186 / 2.268929 (2.348257) | 2.313915 / 55.444624 (-53.130709) | 1.961697 / 6.876477 (-4.914780) | 1.990410 / 2.142072 (-0.151662) | 0.580536 / 4.805227 (-4.224692) | 0.146275 / 6.500664 (-6.354389) | 0.059458 / 0.075469 (-0.016011) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.274841 / 1.841788 (-0.566947) | 18.641853 / 8.074308 (10.567545) | 13.977525 / 10.191392 (3.786133) | 0.151469 / 0.680424 (-0.528955) | 0.018111 / 0.534201 (-0.516090) | 0.393243 / 0.579283 (-0.186040) | 0.412310 / 0.434364 (-0.022054) | 0.461646 / 0.540337 (-0.078692) | 0.633016 / 1.386936 (-0.753920) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006496 / 0.011353 (-0.004857) | 0.003973 / 0.011008 (-0.007035) | 0.064527 / 0.038508 (0.026019) | 0.069390 / 0.023109 (0.046281) | 0.401162 / 0.275898 (0.125264) | 0.431031 / 0.323480 (0.107551) | 0.005244 / 0.007986 (-0.002741) | 0.003283 / 0.004328 (-0.001046) | 0.064931 / 0.004250 (0.060680) | 0.054402 / 0.037052 (0.017350) | 0.397917 / 0.258489 (0.139428) | 0.436728 / 0.293841 (0.142887) | 0.031932 / 0.128546 (-0.096614) | 0.008557 / 0.075646 (-0.067089) | 0.073336 / 0.419271 (-0.345935) | 0.047559 / 0.043533 (0.004026) | 0.395825 / 0.255139 (0.140686) | 0.423002 / 0.283200 (0.139802) | 0.021708 / 0.141683 (-0.119975) | 1.501140 / 1.452155 (0.048985) | 1.558376 / 1.492716 (0.065660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289522 / 0.018006 (0.271516) | 0.449078 / 0.000490 (0.448589) | 0.034174 / 0.000200 (0.033974) | 0.000396 / 0.000054 (0.000342) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032533 / 0.037411 (-0.004878) | 0.093398 / 0.014526 (0.078872) | 0.106930 / 0.176557 (-0.069626) | 0.158743 / 0.737135 (-0.578393) | 0.106904 / 0.296338 (-0.189435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427479 / 0.215209 (0.212270) | 4.271758 / 2.077655 (2.194103) | 2.298770 / 1.504120 (0.794650) | 2.134906 / 1.541195 (0.593712) | 2.220487 / 1.468490 (0.751996) | 0.490506 / 4.584777 (-4.094270) | 3.593876 / 3.745712 (-0.151836) | 3.225656 / 5.269862 (-2.044205) | 2.004434 / 4.565676 (-2.561243) | 0.058015 / 0.424275 (-0.366260) | 0.007221 / 0.007607 (-0.000387) | 0.504928 / 0.226044 (0.278884) | 5.049547 / 2.268929 (2.780618) | 2.743843 / 55.444624 (-52.700781) | 2.398399 / 6.876477 (-4.478078) | 2.562939 / 2.142072 (0.420867) | 0.597229 / 4.805227 (-4.207998) | 0.134664 / 6.500664 (-6.366001) | 0.059612 / 0.075469 (-0.015857) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369692 / 1.841788 (-0.472095) | 19.065326 / 8.074308 (10.991018) | 14.404508 / 10.191392 (4.213116) | 0.175809 / 0.680424 (-0.504615) | 0.020137 / 0.534201 (-0.514064) | 0.394043 / 0.579283 (-0.185240) | 0.424772 / 0.434364 (-0.009592) | 0.475587 / 0.540337 (-0.064751) | 0.644275 / 1.386936 (-0.742661) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007259 / 0.011353 (-0.004094) | 0.004396 / 0.011008 (-0.006612) | 0.096456 / 0.038508 (0.057948) | 0.078752 / 0.023109 (0.055643) | 0.359215 / 0.275898 (0.083317) | 0.396927 / 0.323480 (0.073448) | 0.005611 / 0.007986 (-0.002375) | 0.003687 / 0.004328 (-0.000641) | 0.072794 / 0.004250 (0.068544) | 0.059794 / 0.037052 (0.022741) | 0.372352 / 0.258489 (0.113863) | 0.414038 / 0.293841 (0.120197) | 0.034490 / 0.128546 (-0.094056) | 0.009790 / 0.075646 (-0.065857) | 0.326338 / 0.419271 (-0.092934) | 0.058582 / 0.043533 (0.015049) | 0.354221 / 0.255139 (0.099082) | 0.386669 / 0.283200 (0.103469) | 0.025356 / 0.141683 (-0.116327) | 1.664104 / 1.452155 (0.211950) | 1.766825 / 1.492716 (0.274108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251107 / 0.018006 (0.233101) | 0.478833 / 0.000490 (0.478344) | 0.010776 / 0.000200 (0.010577) | 0.000292 / 0.000054 (0.000238) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032869 / 0.037411 (-0.004543) | 0.098449 / 0.014526 (0.083923) | 0.109954 / 0.176557 (-0.066602) | 0.176786 / 0.737135 (-0.560350) | 0.113477 / 0.296338 (-0.182862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431169 / 0.215209 (0.215960) | 4.303239 / 2.077655 (2.225585) | 2.088885 / 1.504120 (0.584765) | 1.895900 / 1.541195 (0.354706) | 1.997442 / 1.468490 
(0.528952) | 0.541840 / 4.584777 (-4.042937) | 3.991982 / 3.745712 (0.246270) | 3.842421 / 5.269862 (-1.427440) | 2.281150 / 4.565676 (-2.284526) | 0.063851 / 0.424275 (-0.360425) | 0.008470 / 0.007607 (0.000863) | 0.515886 / 0.226044 (0.289841) | 5.202908 / 2.268929 (2.933980) | 2.662789 / 55.444624 (-52.781835) | 2.266731 / 6.876477 (-4.609746) | 2.343760 / 2.142072 (0.201688) | 0.641050 / 4.805227 (-4.164177) | 0.148236 / 6.500664 (-6.352428) | 0.067422 / 0.075469 (-0.008047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.475729 / 1.841788 (-0.366059) | 22.401583 / 8.074308 (14.327274) | 15.886237 / 10.191392 (5.694845) | 0.171828 / 0.680424 (-0.508595) | 0.022161 / 0.534201 (-0.512040) | 0.465873 / 0.579283 (-0.113411) | 0.476386 / 0.434364 (0.042022) | 0.538317 / 0.540337 (-0.002020) | 0.754375 / 1.386936 (-0.632561) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007429 / 0.011353 (-0.003924) | 0.004592 / 0.011008 (-0.006416) | 0.072315 / 0.038508 (0.033807) | 0.080806 / 0.023109 (0.057697) | 0.444607 / 0.275898 (0.168709) | 0.476970 / 0.323480 (0.153490) | 0.006030 / 0.007986 (-0.001956) | 0.003755 / 0.004328 (-0.000573) | 0.074602 / 0.004250 (0.070352) | 0.061846 / 0.037052 (0.024794) | 0.450928 / 0.258489 (0.192439) | 0.493932 / 0.293841 (0.200091) | 0.037398 / 0.128546 (-0.091148) | 0.009807 / 0.075646 (-0.065840) | 0.080531 / 0.419271 (-0.338741) | 0.054052 / 0.043533 (0.010519) | 0.453034 / 0.255139 (0.197895) | 0.464959 / 0.283200 (0.181760) | 0.024718 / 0.141683 (-0.116965) | 1.687552 / 1.452155 (0.235397) | 1.765746 / 1.492716 (0.273029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266998 / 0.018006 (0.248992) | 0.479832 / 0.000490 (0.479342) | 0.005429 / 0.000200 (0.005229) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038885 / 0.037411 (0.001474) | 0.105931 / 0.014526 (0.091405) | 0.120880 / 0.176557 (-0.055677) | 0.184006 / 0.737135 (-0.553130) | 0.120750 / 0.296338 (-0.175589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478626 / 0.215209 (0.263417) | 4.797355 / 2.077655 (2.719700) | 2.582758 / 1.504120 (1.078638) | 2.396488 / 1.541195 (0.855293) | 2.515597 / 1.468490 (1.047107) | 0.544541 / 4.584777 (-4.040236) | 4.150702 / 3.745712 (0.404990) | 3.676837 / 5.269862 (-1.593024) | 2.287275 / 4.565676 (-2.278402) | 0.064602 / 0.424275 (-0.359673) | 0.008253 / 0.007607 (0.000646) | 0.576201 / 0.226044 (0.350157) | 5.859839 / 2.268929 (3.590910) | 3.248603 / 55.444624 (-52.196021) | 2.841959 / 6.876477 (-4.034518) | 2.991120 / 2.142072 (0.849047) | 0.667755 / 4.805227 (-4.137472) | 0.151219 / 6.500664 (-6.349445) | 0.068990 / 0.075469 (-0.006479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.572359 / 1.841788 (-0.269429) | 21.890279 / 8.074308 (13.815971) | 15.927473 / 10.191392 (5.736081) | 0.170388 / 0.680424 (-0.510036) | 0.023282 / 0.534201 (-0.510919) | 0.459371 / 0.579283 (-0.119912) | 0.468838 / 0.434364 (0.034475) | 0.546438 / 0.540337 (0.006101) | 0.746912 / 1.386936 (-0.640024) |\n\n</details>\n</details>\n\n\n",
"Yes, @lhoestq, you are right. I think we cross-send fixing PRs in a 15 minute interval... :sweat_smile: \r\n\r\nI would say the code in this PR is simpler and easier to understand, but feel free to ignore it.",
"I think the correct way it to check if \"file\" in in the tuple if it's a tuple (in case someone adds another protocol name for the local filesystem)"
] | 2023-10-23T09:29:17Z
| 2024-01-11T06:33:35Z
| 2023-11-14T14:17:40Z
|
MEMBER
| null | null | null |
Fix #6333.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6335/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6335/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6335",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6335"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5751
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5751/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5751/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5751/events
|
https://github.com/huggingface/datasets/pull/5751
| 1,668,333,316
|
PR_kwDODunzps5OVMuT
| 5,751
|
Consistent ArrayXD Python formatting + better NumPy/Pandas formatting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010459 / 0.011353 (-0.000894) | 0.007009 / 0.011008 (-0.003999) | 0.153885 / 0.038508 (0.115377) | 0.037308 / 0.023109 (0.014199) | 0.431931 / 0.275898 (0.156033) | 0.452940 / 0.323480 (0.129461) | 0.008572 / 0.007986 (0.000586) | 0.007479 / 0.004328 (0.003150) | 0.093835 / 0.004250 (0.089584) | 0.050172 / 0.037052 (0.013120) | 0.428855 / 0.258489 (0.170366) | 0.517814 / 0.293841 (0.223974) | 0.058558 / 0.128546 (-0.069988) | 0.019550 / 0.075646 (-0.056096) | 0.449837 / 0.419271 (0.030566) | 0.069710 / 0.043533 (0.026177) | 0.444163 / 0.255139 (0.189024) | 0.469003 / 0.283200 (0.185803) | 0.114665 / 0.141683 (-0.027018) | 1.822415 / 1.452155 (0.370261) | 1.956360 / 1.492716 (0.463644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237489 / 0.018006 (0.219483) | 0.556947 / 0.000490 (0.556457) | 0.006988 / 0.000200 (0.006789) | 0.000499 / 0.000054 (0.000444) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037047 / 0.037411 (-0.000364) | 0.133973 / 0.014526 (0.119447) | 0.137072 / 0.176557 (-0.039485) | 0.201520 / 0.737135 (-0.535615) | 0.144177 / 0.296338 (-0.152161) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.694853 / 0.215209 (0.479644) | 6.805746 / 2.077655 (4.728091) | 2.717864 / 1.504120 (1.213744) | 2.360529 / 1.541195 (0.819335) | 2.384403 / 1.468490 
(0.915913) | 1.337512 / 4.584777 (-3.247265) | 5.734090 / 3.745712 (1.988378) | 5.344909 / 5.269862 (0.075047) | 2.906218 / 4.565676 (-1.659458) | 0.160148 / 0.424275 (-0.264127) | 0.015159 / 0.007607 (0.007551) | 0.871356 / 0.226044 (0.645312) | 8.550965 / 2.268929 (6.282037) | 3.613522 / 55.444624 (-51.831103) | 2.868508 / 6.876477 (-4.007969) | 2.912263 / 2.142072 (0.770190) | 1.652548 / 4.805227 (-3.152680) | 0.274117 / 6.500664 (-6.226547) | 0.085911 / 0.075469 (0.010442) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.624798 / 1.841788 (-0.216989) | 18.413303 / 8.074308 (10.338995) | 21.742854 / 10.191392 (11.551462) | 0.255937 / 0.680424 (-0.424487) | 0.029492 / 0.534201 (-0.504709) | 0.541932 / 0.579283 (-0.037351) | 0.638594 / 0.434364 (0.204230) | 0.607427 / 0.540337 (0.067090) | 0.763046 / 1.386936 (-0.623890) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.020543 / 0.011353 (0.009190) | 0.006079 / 0.011008 (-0.004929) | 0.100558 / 0.038508 (0.062050) | 0.039474 / 0.023109 (0.016365) | 0.468889 / 0.275898 (0.192991) | 0.477731 / 0.323480 (0.154251) | 0.006999 / 0.007986 (-0.000987) | 0.005845 / 0.004328 (0.001516) | 0.110022 / 0.004250 (0.105772) | 0.056885 / 0.037052 (0.019833) | 0.447296 / 0.258489 (0.188807) | 0.489007 / 0.293841 (0.195166) | 0.055086 / 0.128546 (-0.073460) | 0.020623 / 0.075646 (-0.055024) | 0.129599 / 0.419271 (-0.289672) | 0.064316 / 0.043533 (0.020784) | 0.446681 / 0.255139 (0.191542) | 0.488897 / 0.283200 (0.205698) | 0.119121 / 0.141683 (-0.022562) | 1.836248 / 1.452155 (0.384093) | 2.002456 / 1.492716 (0.509740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249344 / 0.018006 (0.231338) | 0.544320 / 0.000490 (0.543830) | 0.000459 / 0.000200 (0.000259) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038771 / 0.037411 (0.001359) | 0.129527 / 0.014526 (0.115002) | 0.144681 / 0.176557 (-0.031876) | 0.208237 / 0.737135 (-0.528898) | 0.149502 / 0.296338 (-0.146836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668457 / 0.215209 (0.453248) | 6.729550 / 2.077655 (4.651895) | 2.741076 / 1.504120 (1.236956) | 2.394737 / 1.541195 (0.853542) | 2.415242 / 1.468490 (0.946752) | 1.322334 / 4.584777 (-3.262442) | 5.787454 / 3.745712 (2.041742) | 3.309847 / 5.269862 (-1.960015) | 2.199181 / 4.565676 (-2.366495) | 0.170740 / 0.424275 (-0.253535) | 0.015095 / 0.007607 (0.007487) | 0.864157 / 0.226044 (0.638112) | 8.701858 / 2.268929 (6.432929) | 3.617966 / 55.444624 (-51.826658) | 2.847144 / 6.876477 (-4.029332) | 3.011391 / 2.142072 (0.869319) | 1.595466 / 4.805227 (-3.209762) | 0.284010 / 6.500664 (-6.216654) | 0.091054 / 0.075469 (0.015585) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.702404 / 1.841788 (-0.139384) | 19.427130 / 8.074308 (11.352822) | 21.900446 / 10.191392 (11.709053) | 0.244088 / 0.680424 (-0.436336) | 0.027428 / 0.534201 (-0.506773) | 0.552226 / 0.579283 (-0.027057) | 0.653102 / 0.434364 (0.218738) | 0.635379 / 0.540337 (0.095042) | 0.771842 / 1.386936 (-0.615094) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004806) | 0.004569 / 0.011008 (-0.006439) | 0.097782 / 0.038508 (0.059274) | 0.028157 / 0.023109 (0.005048) | 0.319017 / 0.275898 (0.043119) | 0.340758 / 0.323480 (0.017278) | 0.005078 / 0.007986 (-0.002907) | 0.003343 / 0.004328 (-0.000985) | 0.074194 / 0.004250 (0.069944) | 0.037918 / 0.037052 (0.000866) | 0.310298 / 0.258489 (0.051809) | 0.349441 / 0.293841 (0.055600) | 0.030375 / 0.128546 (-0.098171) | 0.011527 / 0.075646 (-0.064119) | 0.320499 / 0.419271 (-0.098773) | 0.042639 / 0.043533 (-0.000894) | 0.312182 / 0.255139 (0.057043) | 0.329058 / 0.283200 (0.045858) | 0.085517 / 0.141683 (-0.056165) | 1.532603 / 1.452155 (0.080448) | 1.583996 / 1.492716 (0.091279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208286 / 0.018006 (0.190280) | 0.418696 / 0.000490 (0.418206) | 0.007051 / 0.000200 (0.006851) | 0.000409 / 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024055 / 0.037411 (-0.013356) | 0.098420 / 0.014526 (0.083894) | 0.104785 / 0.176557 (-0.071771) | 0.163618 / 0.737135 (-0.573517) | 0.110006 / 0.296338 (-0.186332) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418756 / 0.215209 (0.203547) | 4.179557 / 2.077655 (2.101902) | 1.881708 / 1.504120 (0.377588) | 1.683393 / 1.541195 (0.142198) | 1.731909 / 1.468490 
(0.263419) | 0.696674 / 4.584777 (-3.888103) | 3.384167 / 3.745712 (-0.361545) | 3.173479 / 5.269862 (-2.096382) | 1.620019 / 4.565676 (-2.945658) | 0.082850 / 0.424275 (-0.341426) | 0.012396 / 0.007607 (0.004789) | 0.519743 / 0.226044 (0.293699) | 5.208480 / 2.268929 (2.939552) | 2.312917 / 55.444624 (-53.131708) | 1.963486 / 6.876477 (-4.912991) | 2.084553 / 2.142072 (-0.057519) | 0.805486 / 4.805227 (-3.999742) | 0.153429 / 6.500664 (-6.347235) | 0.069451 / 0.075469 (-0.006018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197185 / 1.841788 (-0.644603) | 14.341005 / 8.074308 (6.266696) | 14.476162 / 10.191392 (4.284770) | 0.157372 / 0.680424 (-0.523052) | 0.016444 / 0.534201 (-0.517757) | 0.383721 / 0.579283 (-0.195562) | 0.380800 / 0.434364 (-0.053564) | 0.441137 / 0.540337 (-0.099200) | 0.524778 / 1.386936 (-0.862158) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006728 / 0.011353 (-0.004625) | 0.004536 / 0.011008 (-0.006472) | 0.076266 / 0.038508 (0.037757) | 0.028133 / 0.023109 (0.005024) | 0.351072 / 0.275898 (0.075174) | 0.375823 / 0.323480 (0.052344) | 0.005166 / 0.007986 (-0.002819) | 0.004717 / 0.004328 (0.000388) | 0.076130 / 0.004250 (0.071880) | 0.041354 / 0.037052 (0.004301) | 0.345904 / 0.258489 (0.087415) | 0.384119 / 0.293841 (0.090278) | 0.030759 / 0.128546 (-0.097787) | 0.011659 / 0.075646 (-0.063988) | 0.085269 / 0.419271 (-0.334002) | 0.042161 / 0.043533 (-0.001372) | 0.340806 / 0.255139 (0.085667) | 0.366832 / 0.283200 (0.083632) | 0.092187 / 0.141683 (-0.049495) | 1.520035 / 1.452155 (0.067880) | 1.603856 / 1.492716 (0.111140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237763 / 0.018006 (0.219757) | 0.413406 / 0.000490 (0.412916) | 0.000415 / 0.000200 (0.000215) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026095 / 0.037411 (-0.011317) | 0.105775 / 0.014526 (0.091249) | 0.108452 / 0.176557 (-0.068105) | 0.160014 / 0.737135 (-0.577122) | 0.112385 / 0.296338 (-0.183953) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437327 / 0.215209 (0.222118) | 4.374949 / 2.077655 (2.297294) | 2.090292 / 1.504120 (0.586172) | 1.885946 / 1.541195 (0.344752) | 1.946768 / 1.468490 (0.478278) | 0.704124 / 4.584777 (-3.880653) | 3.394994 / 3.745712 (-0.350718) | 1.905189 / 5.269862 (-3.364673) | 1.182300 / 4.565676 (-3.383376) | 0.082920 / 0.424275 (-0.341355) | 0.012781 / 0.007607 (0.005174) | 0.535467 / 0.226044 (0.309423) | 5.362799 / 2.268929 (3.093870) | 2.504825 / 55.444624 (-52.939799) | 2.180458 / 6.876477 (-4.696019) | 2.317750 / 2.142072 (0.175677) | 0.811182 / 4.805227 (-3.994045) | 0.151654 / 6.500664 (-6.349010) | 0.067925 / 0.075469 (-0.007544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290746 / 1.841788 (-0.551042) | 14.799309 / 8.074308 (6.725001) | 14.439722 / 10.191392 (4.248330) | 0.144358 / 0.680424 (-0.536066) | 0.016688 / 0.534201 (-0.517513) | 0.392907 / 0.579283 (-0.186376) | 0.383109 / 0.434364 (-0.051255) | 0.450069 / 0.540337 (-0.090269) | 0.532534 / 1.386936 (-0.854402) |\n\n</details>\n</details>\n\n\n",
"I turned it into a draft to fix the failing tests, but CI is now green, so there is no good reason for it :)"
] | 2023-04-14T14:13:59Z
| 2023-04-20T14:43:20Z
| 2023-04-20T14:40:34Z
|
COLLABORATOR
| null | null | null |
Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Pandas.
(Reported in https://github.com/huggingface/datasets/issues/5719#issuecomment-1507579671)
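A minimal sketch (not taken from the PR itself) of the behavior described above, assuming a variable-shaped `Array2D` with a dynamic first dimension:

```python
from datasets import Array2D, Dataset, Features

# Toy dataset whose rows have different first-dimension lengths
features = Features({"x": Array2D(shape=(None, 2), dtype="int64")})
ds = Dataset.from_dict({"x": [[[1, 2], [3, 4]], [[5, 6]]]}, features=features)

# Python formatting: each row comes back as a list of lists
print(ds[0]["x"])  # [[1, 2], [3, 4]]

# NumPy formatting: an object array is expected when the row shapes differ,
# and a regular numeric array when they are all equal
print(ds.with_format("numpy")["x"])
```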
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5751/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5751/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5751",
"merged_at": "2023-04-20T14:40:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5751"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5306
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5306/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5306/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5306/events
|
https://github.com/huggingface/datasets/issues/5306
| 1,465,968,639
|
I_kwDODunzps5XYOf_
| 5,306
|
Can't use custom feature description when loading a dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Forgot to actually convert the feature dict to a Feature object. Closing."
] | 2022-11-28T07:55:44Z
| 2022-11-28T08:11:45Z
| 2022-11-28T08:11:44Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load.
### Steps to reproduce the bug
```python
from datasets import Sequence, Value, load_dataset

# Creating features
task_list = [f"motif_G{i}" for i in range(19, 53)]
features = {t: Sequence(feature=Value(dtype="float64")) for t in task_list}
for col_name in ["class_label"]:
    features[col_name] = Sequence(feature=Value(dtype="int64"))
for col_name in ["num_nodes"]:
    features[col_name] = Value(dtype="int64")
for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]:
    features[col_name] = Sequence(feature=Value(dtype="float64"))
for col_name in ["edge_attr", "node_feat", "edge_index"]:
    features[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64")))
print(features)

dataset = load_dataset(path="graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```
The last line crashes with `TypeError: argument of type 'Sequence' is not iterable`.
Full stack:
```
Traceback (most recent call last):
  File "pretrain_tokengt.py", line 131, in <module>
    main(output_folder = "../workspace/pretraining",
  File "pretrain_tokengt.py", line 52, in main
    dataset = load_dataset(path=f"graphs-datasets/{dataset_name}", split="train", features=features)
  File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1718, in load_dataset
    builder_instance = load_dataset_builder(
  File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1514, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "huggingface_env/lib/python3.8/site-packages/datasets/builder.py", line 321, in __init__
    info.update(self._info())
  File "huggingface_env/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 62, in _info
    return datasets.DatasetInfo(features=self.config.features)
  File "<string>", line 20, in __init__
  File "huggingface_env/lib/python3.8/site-packages/datasets/info.py", line 155, in __post_init__
    self.features = Features.from_dict(self.features)
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1599, in from_dict
    obj = generate_from_dict(dic)
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in generate_from_dict
    return {key: generate_from_dict(value) for key, value in obj.items()}
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in <dictcomp>
    return {key: generate_from_dict(value) for key, value in obj.items()}
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1281, in generate_from_dict
    if "_type" not in obj or isinstance(obj["_type"], dict):
TypeError: argument of type 'Sequence' is not iterable
```
### Expected behavior
For it not to crash.
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
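
As the closing comment notes, the fix is to convert the plain dict into a `Features` object before loading. A minimal sketch, reusing the `features` dict built above:

```python
from datasets import Features, load_dataset

features = Features(features)  # wrap the plain dict in a Features object
dataset = load_dataset("graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```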
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5306/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5306/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7221
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7221/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7221/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7221/events
|
https://github.com/huggingface/datasets/pull/7221
| 2,582,114,631
|
PR_kwDODunzps5-YNI8
| 7,221
|
add CustomFeature base class to support user-defined features with encoding/decoding logic
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq would you be open to supporting this kind of extensibility?",
"I suggested a fix in https://github.com/huggingface/datasets/issues/7220 that would not necessarily require a parent class for custom features, lmk what you think"
] | 2024-10-11T20:10:27Z
| 2025-01-28T09:40:29Z
| 2025-01-28T09:40:29Z
|
CONTRIBUTOR
| null | null | null |
Intended as a fix for #7220, if this kind of extensibility is something that `datasets` is willing to support!
```python
from datasets import Features
from datasets.features.features import CustomFeature

class ListOfStrs(CustomFeature):
    requires_encoding = True

    def _encode_example(self, value):
        # Wrap a bare string in a list; pass lists through unchanged
        if isinstance(value, str):
            return [value]
        return value

feats = Features(strlist=ListOfStrs())
assert feats.encode_example({"strlist": "a"})["strlist"] == feats["strlist"].encode_example("a")
```
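With such a base class, `Features.encode_example` would dispatch to the user-defined feature transparently (as the assertion above illustrates), so custom encoding logic would also apply when building datasets with `Dataset.from_dict` or `map`, assuming the rest of the feature machinery honors `requires_encoding`.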
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7221/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7221/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7221",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7221"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6895
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6895/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6895/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6895/events
|
https://github.com/huggingface/datasets/pull/6895
| 2,292,993,156
|
PR_kwDODunzps5vRK8P
| 6,895
|
Document that to_json defaults to JSON Lines
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6895). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004914 / 0.011353 (-0.006439) | 0.003621 / 0.011008 (-0.007387) | 0.062841 / 0.038508 (0.024333) | 0.031630 / 0.023109 (0.008520) | 0.247666 / 0.275898 (-0.028232) | 0.288192 / 0.323480 (-0.035288) | 0.003145 / 0.007986 (-0.004841) | 0.002655 / 0.004328 (-0.001674) | 0.049484 / 0.004250 (0.045233) | 0.046593 / 0.037052 (0.009540) | 0.271550 / 0.258489 (0.013061) | 0.293228 / 0.293841 (-0.000613) | 0.026941 / 0.128546 (-0.101606) | 0.009936 / 0.075646 (-0.065710) | 0.201741 / 0.419271 (-0.217530) | 0.035435 / 0.043533 (-0.008098) | 0.251868 / 0.255139 (-0.003271) | 0.272082 / 0.283200 (-0.011118) | 0.019731 / 0.141683 (-0.121952) | 1.125752 / 1.452155 (-0.326403) | 1.152058 / 1.492716 (-0.340659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099695 / 0.018006 (0.081689) | 0.308306 / 0.000490 (0.307816) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018616 / 0.037411 (-0.018795) | 0.061886 / 0.014526 (0.047360) | 0.074059 / 0.176557 (-0.102498) | 0.124902 / 0.737135 (-0.612234) | 0.075108 / 0.296338 (-0.221230) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.336707 / 0.215209 (0.121498) | 2.805197 / 2.077655 (0.727542) | 1.565826 / 1.504120 (0.061706) | 1.443708 / 1.541195 (-0.097486) | 1.341167 / 
1.468490 (-0.127323) | 0.566814 / 4.584777 (-4.017963) | 2.374536 / 3.745712 (-1.371176) | 2.804921 / 5.269862 (-2.464941) | 1.739848 / 4.565676 (-2.825829) | 0.062779 / 0.424275 (-0.361496) | 0.005341 / 0.007607 (-0.002266) | 0.326482 / 0.226044 (0.100438) | 3.273460 / 2.268929 (1.004531) | 1.803656 / 55.444624 (-53.640968) | 1.502518 / 6.876477 (-5.373958) | 1.523665 / 2.142072 (-0.618407) | 0.642443 / 4.805227 (-4.162784) | 0.117820 / 6.500664 (-6.382844) | 0.042540 / 0.075469 (-0.032929) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963399 / 1.841788 (-0.878388) | 11.503648 / 8.074308 (3.429340) | 9.483957 / 10.191392 (-0.707435) | 0.129118 / 0.680424 (-0.551306) | 0.014136 / 0.534201 (-0.520065) | 0.286766 / 0.579283 (-0.292517) | 0.273328 / 0.434364 (-0.161036) | 0.324075 / 0.540337 (-0.216262) | 0.420408 / 1.386936 (-0.966528) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005099 / 0.011353 (-0.006254) | 0.003721 / 0.011008 (-0.007288) | 0.050614 / 0.038508 (0.012106) | 0.031882 / 0.023109 (0.008773) | 0.267619 / 0.275898 (-0.008279) | 0.291874 / 0.323480 (-0.031606) | 0.004254 / 0.007986 (-0.003731) | 0.002766 / 0.004328 (-0.001563) | 0.049291 / 0.004250 (0.045041) | 0.043302 / 0.037052 (0.006249) | 0.274891 / 0.258489 (0.016402) | 0.304977 / 0.293841 (0.011136) | 0.029088 / 0.128546 (-0.099459) | 0.010425 / 0.075646 (-0.065221) | 0.057781 / 0.419271 (-0.361491) | 0.033589 / 0.043533 (-0.009943) | 0.264293 / 0.255139 (0.009154) | 0.284861 / 0.283200 (0.001661) | 0.018025 / 0.141683 (-0.123658) | 1.124954 / 1.452155 (-0.327200) | 1.161957 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103622 / 0.018006 (0.085615) | 0.310915 / 0.000490 (0.310425) | 0.000241 / 0.000200 (0.000041) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022550 / 0.037411 (-0.014862) | 0.076466 / 0.014526 (0.061940) | 0.088297 / 0.176557 (-0.088260) | 0.128659 / 0.737135 (-0.608477) | 0.091823 / 0.296338 (-0.204516) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293431 / 0.215209 (0.078222) | 2.888105 / 2.077655 (0.810450) | 1.559581 / 1.504120 (0.055461) | 1.421424 / 1.541195 (-0.119771) | 1.437941 / 1.468490 (-0.030549) | 0.577544 / 4.584777 (-4.007233) | 0.968840 / 3.745712 (-2.776872) | 2.799796 / 5.269862 (-2.470066) | 1.744791 / 4.565676 (-2.820885) | 0.064159 / 0.424275 (-0.360116) | 0.005043 / 0.007607 (-0.002564) | 0.341039 / 0.226044 (0.114995) | 3.354402 / 2.268929 (1.085474) | 1.904093 / 55.444624 (-53.540532) | 1.604046 / 6.876477 (-5.272431) | 1.610384 / 2.142072 (-0.531688) | 0.658129 / 4.805227 (-4.147098) | 0.119297 / 6.500664 (-6.381367) | 0.041396 / 0.075469 (-0.034073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001109 / 1.841788 (-0.840678) | 12.081856 / 8.074308 (4.007548) | 10.090943 / 10.191392 (-0.100449) | 0.150433 / 0.680424 (-0.529991) | 0.015850 / 0.534201 (-0.518351) | 0.286590 / 0.579283 (-0.292693) | 0.131137 / 0.434364 (-0.303227) | 0.389033 / 0.540337 (-0.151304) | 0.421382 / 1.386936 (-0.965554) |\n\n</details>\n</details>\n\n\n"
] | 2024-05-13T14:22:34Z
| 2024-05-16T14:37:25Z
| 2024-05-16T14:31:26Z
|
MEMBER
| null | null | null |
Document that `Dataset.to_json` defaults to JSON Lines by adding an explanation to the corresponding docstring.
Fix #6894.
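
For reference, a minimal sketch of the documented default (keyword arguments are forwarded to pandas' `to_json`, so passing `lines=False` should produce a single JSON array instead):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})

# Default: JSON Lines, one JSON object per line
ds.to_json("data.jsonl")

# Override the default to get a single JSON array
ds.to_json("data.json", lines=False)
```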
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6895/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6895/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6895.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6895",
"merged_at": "2024-05-16T14:31:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6895.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6895"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5209
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5209/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5209/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5209/events
|
https://github.com/huggingface/datasets/issues/5209
| 1,438,367,678
|
I_kwDODunzps5Vu7--
| 5,209
|
Implement ability to define splits in metadata section of dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/merveenoyan",
"id": 53175384,
"login": "merveenoyan",
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/merveenoyan",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"@merveenoyan Do you want different files to be splits or configurations?\r\n\r\nFrom [what you specified in `Readme.md`](https://huggingface.co/datasets/inria-soda/tabular-benchmark/commit/fb4575853772c62a20203bdd6cc0202f5db4ce4e) I hypothesize that you want to have 4 **configs** corresponding to directories: `\"clf_cat\", \"clf_num\", \"reg_cat\", \"reg_num\"`. And inside each config you require to have as many splits as there are `csv` files\r\nso if you run \r\n```python\r\nload_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat\", split=\"compass\")\r\n```\r\nyou will generate the data only from `compass.csv` file.\r\nIn this case, running `load_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat\"`) without split parameter will return `DatasetDict` object with `\"KDDCup09_upselling\", \"cat_compass\", \"cat_covertype\", ... \"road_safety\"` keys (which values are splits - `Dataset` objects)\r\n\r\n**or**\r\ndo you want each file to be a separate config? Like:\r\n```python\r\nload_dataset(\"inria-soda/tabular-benchmark\", \"clf_cat_compass\") # returns DatasetDict with a single \"train\" split\r\n```\r\n**or**\r\nmaybe smth completely different? :smile: \r\n\r\nAnyway, now I have an impression that this is probably rather a matter of automatically inferring configs from repository structure rather than providing parameters in metadata yaml.\r\n",
"@polinaeterna I want the latter where you can think of every CSV file as a config, like MNLI from GLUE.",
"@merveenoyan @lhoestq I see two solutions to this case. \r\n1. Parse configurations automatically from directories names. That is, if you have data structure like:\r\n```\r\ntabular-benchmark\r\n └─clf_cat_compass\r\n └─compass.csv\r\n └─clf_cat_cat_covertype\r\n └─covertype.csv\r\n ...\r\n └─reg_cat_house_sales\r\n └─house_sales.csv\r\n```\r\nyou'll get \"clf_cat_compass\", \"clf_cat_cat_covertype\", ... \"reg_cat_house_sales\" configurations that would contain **only files from corresponding directories**. \r\n**\\+** this is a requested change and needed in general and would solve other problems, see https://github.com/huggingface/datasets/issues/4578, would also help with https://github.com/huggingface/datasets/pull/5213 which I'm working on currently\r\n**\\+** would allow users to do just `load_dataset(“inria-soda/tabular-benchmark”, “clf_cat_compass”)`, no `data_files` param required\r\n**\\-** in this specific case it would require restructuring of the data - putting each file in a directory named as a config name (to me personally it doesn't seem to be a big deal) \r\n\r\n2. More or less what we discussed before - add support for manually specifying parameters in the metadata. We can add new metadata yaml field (say, `\"custom_configs_info\"`), so that we can provide smth like:\r\n```yaml\r\n---\r\n...\r\ndataset_info:\r\n ... \r\ncustom_configs_info:\r\n- config_name: reg_cat_house_sales\r\n data_files:\r\n - reg_cat/house_sales.csv\r\n- config_name: clf_cat_compass\r\n data_files:\r\n - clf_cat/compass.csv\r\n...\r\n---\r\n```\r\n**\\+** Would be useful not only for tabular data and not only for `data_files` parameter - any packaged dataset’s viewer can be customized to use specific, non-default parameters. @merveenoyan do you maybe have any other examples/use cases in mind where you want to provide any specific parameters to the viewer? \r\n**\\-** I'm not sure here but assume that it might require changes in interaction with the viewer on the hub side - to parse these configurations, as they not default configurations (not in `BUILDER_CONFIGS` list). cc @severo But probably this can be solved on the `datasets` side too.\r\n\r\nOverall, I would start from implementing the first solution since it's related to what I'm doing now and is super useful for `datasets` in general. And then if we agree that having more flexibility in providing parameters to the viewer is required, I can implement the second one. Let me know what you think :) ",
"> We can add new metadata yaml field (say, \"custom_configs_info\"), so that we can provide smth like:\r\n\r\nLove it ! Some other ideas to name the \"custom_configs_info\" field: \"configs\", \"parameters\", \"config_args\", \"configurations\"\r\n\r\n> it might require changes in interaction with the viewer on the hub side - to parse these configurations, as they not default configurations (not in BUILDER_CONFIGS list)\r\n\r\nIf we update the `get_dataset_config_names()` function in `datasets` in inspect.py we should be fine - that's what the viewer is using\r\n\r\n> Overall, I would start from implementing the first solution since it's related to what I'm doing now and is super useful for datasets in general. And then if we agree that having more flexibility in providing parameters to the viewer is required, I can implement the second one. Let me know what you think :)\r\n\r\nActually I feel like the second solution includes the first use case you mentioned. If you implement the second solution, then users would just have to add a few lines of YAML and their directories would be considered configurations no ? Maybe there's no need to implement two different logics to do the same thing",
"is there any update on this? 🕵🏻",
"@merveenoyan I haven't started working on this yet, working on adding configs to packaged datasets instead: https://github.com/huggingface/datasets/pull/5213 because this both would allow you to solve your issue and is a frequently requested feature.\r\n\r\nadding arbitrary parameters to yaml would be my next task i think!",
"@merveenoyan ignore my comment above, I'm switching to this task now :D",
"I want to be able to create folders in a model.",
"Addressed in #5331 "
] | 2022-11-07T13:27:16Z
| 2023-07-21T14:36:02Z
| 2023-07-21T14:36:01Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see a bunch of folders that contain various CSV files. I’d like the dataset viewer to show these files instead of only one dataset like it currently does (and also let people load them as splits instead of loading through `data_files`).
e.g. GLUE has various splits in the viewer, but it’s overkill to ask people to implement a loading script, so it would be better to let them define these in the README file instead.
Also pinging @polinaeterna @lhoestq @adrinjalali
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5209/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5209/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5526
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5526/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5526/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5526/events
|
https://github.com/huggingface/datasets/pull/5526
| 1,580,488,133
|
PR_kwDODunzps5JwVol
| 5,526
|
Allow loading/saving of FAISS index using fsspec
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the quick review! I updated the code with your suggestion",
"Thanks for the quick review @albertvillanova! I updated the code with your suggestions",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008577 / 0.011353 (-0.002776) | 0.005714 / 0.011008 (-0.005294) | 0.114718 / 0.038508 (0.076210) | 0.039799 / 0.023109 (0.016690) | 0.387530 / 0.275898 (0.111632) | 0.395739 / 0.323480 (0.072259) | 0.006775 / 0.007986 (-0.001211) | 0.006280 / 0.004328 (0.001952) | 0.086470 / 0.004250 (0.082220) | 0.054424 / 0.037052 (0.017371) | 0.361989 / 0.258489 (0.103500) | 0.424678 / 0.293841 (0.130837) | 0.043081 / 0.128546 (-0.085465) | 0.013903 / 0.075646 (-0.061743) | 0.397625 / 0.419271 (-0.021647) | 0.059789 / 0.043533 (0.016256) | 0.375195 / 0.255139 (0.120056) | 0.403724 / 0.283200 (0.120524) | 0.121470 / 0.141683 (-0.020213) | 1.734496 / 1.452155 (0.282341) | 1.820479 / 1.492716 (0.327763) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239672 / 0.018006 (0.221665) | 0.499373 / 0.000490 (0.498883) | 0.005034 / 0.000200 (0.004834) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033000 / 0.037411 (-0.004411) | 0.130930 / 0.014526 (0.116404) | 0.151690 / 0.176557 (-0.024866) | 0.211839 / 0.737135 (-0.525296) | 0.148727 / 0.296338 (-0.147612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480592 / 0.215209 (0.265382) | 4.809700 / 2.077655 (2.732046) | 2.232414 / 1.504120 (0.728294) | 2.035432 / 1.541195 (0.494237) | 2.115991 / 1.468490 
(0.647501) | 0.817841 / 4.584777 (-3.766936) | 4.718035 / 3.745712 (0.972323) | 4.107102 / 5.269862 (-1.162759) | 2.166838 / 4.565676 (-2.398839) | 0.102207 / 0.424275 (-0.322068) | 0.014686 / 0.007607 (0.007079) | 0.599922 / 0.226044 (0.373877) | 5.985840 / 2.268929 (3.716912) | 2.769199 / 55.444624 (-52.675425) | 2.427095 / 6.876477 (-4.449382) | 2.586666 / 2.142072 (0.444593) | 0.987650 / 4.805227 (-3.817578) | 0.199419 / 6.500664 (-6.301245) | 0.076710 / 0.075469 (0.001240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.454509 / 1.841788 (-0.387278) | 18.267849 / 8.074308 (10.193541) | 16.701880 / 10.191392 (6.510488) | 0.204225 / 0.680424 (-0.476199) | 0.020295 / 0.534201 (-0.513906) | 0.504254 / 0.579283 (-0.075029) | 0.535071 / 0.434364 (0.100707) | 0.611825 / 0.540337 (0.071488) | 0.697289 / 1.386936 (-0.689647) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009141 / 0.011353 (-0.002211) | 0.005987 / 0.011008 (-0.005021) | 0.092003 / 0.038508 (0.053495) | 0.043239 / 0.023109 (0.020130) | 0.400425 / 0.275898 (0.124527) | 0.464849 / 0.323480 (0.141369) | 0.008256 / 0.007986 (0.000270) | 0.006251 / 0.004328 (0.001923) | 0.095263 / 0.004250 (0.091013) | 0.057899 / 0.037052 (0.020847) | 0.402899 / 0.258489 (0.144410) | 0.477411 / 0.293841 (0.183570) | 0.044122 / 0.128546 (-0.084424) | 0.014158 / 0.075646 (-0.061489) | 0.116354 / 0.419271 (-0.302917) | 0.061045 / 0.043533 (0.017512) | 0.411635 / 0.255139 (0.156497) | 0.466281 / 0.283200 (0.183082) | 0.129423 / 0.141683 (-0.012260) | 1.799790 / 1.452155 (0.347635) | 2.004578 / 1.492716 (0.511862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224012 / 0.018006 (0.206006) | 0.502972 / 0.000490 (0.502482) | 0.003560 / 0.000200 (0.003360) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034794 / 0.037411 (-0.002618) | 0.139646 / 0.014526 (0.125120) | 0.144330 / 0.176557 (-0.032226) | 0.202528 / 0.737135 (-0.534607) | 0.151561 / 0.296338 (-0.144777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504343 / 0.215209 (0.289133) | 5.050690 / 2.077655 (2.973035) | 2.433107 / 1.504120 (0.928987) | 2.197443 / 1.541195 (0.656248) | 2.331225 / 1.468490 (0.862734) | 0.834066 / 4.584777 (-3.750711) | 4.837648 / 3.745712 (1.091936) | 4.105672 / 5.269862 (-1.164189) | 2.281557 / 4.565676 (-2.284120) | 0.102257 / 0.424275 (-0.322018) | 0.014425 / 0.007607 (0.006818) | 0.629290 / 0.226044 (0.403245) | 6.251513 / 2.268929 (3.982585) | 2.959012 / 55.444624 (-52.485613) | 2.570031 / 6.876477 (-4.306446) | 2.657525 / 2.142072 (0.515453) | 1.002861 / 4.805227 (-3.802367) | 0.199326 / 6.500664 (-6.301338) | 0.078428 / 0.075469 (0.002958) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.579587 / 1.841788 (-0.262201) | 18.567509 / 8.074308 (10.493201) | 17.162144 / 10.191392 (6.970752) | 0.193460 / 0.680424 (-0.486964) | 0.020819 / 0.534201 (-0.513382) | 0.501929 / 0.579283 (-0.077354) | 0.508039 / 0.434364 (0.073675) | 0.582656 / 0.540337 (0.042319) | 0.693624 / 1.386936 (-0.693312) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-10T23:37:14Z
| 2023-03-27T15:26:46Z
| 2023-03-27T15:18:20Z
|
CONTRIBUTOR
| null | null | null |
Fixes #5428
Allow loading/saving of FAISS index using fsspec:
1. Simply use BufferedIOWriter/Reader to read/write indices on an fsspec stream.
2. Needed `mockfs` in the test, so I took it out of the `TestCase`. Let me know if that makes sense.
I can work on the documentation once the code changes are approved.
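As a rough sketch of how this could be used (not necessarily the final API of this PR; the remote path and the `storage_options` argument are assumptions):
```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"embeddings": np.random.rand(100, 8).astype("float32").tolist()})
ds.add_faiss_index(column="embeddings")

# Hypothetical remote path; with fsspec support the index could be written to
# and read back from object storage instead of local disk.
remote_path = "s3://my-bucket/my_index.faiss"
ds.save_faiss_index("embeddings", remote_path, storage_options={"anon": False})

ds.drop_index("embeddings")
ds.load_faiss_index("embeddings", remote_path, storage_options={"anon": False})
```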
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5526/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5526/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5526.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5526",
"merged_at": "2023-03-27T15:18:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5526.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5526"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6938
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6938/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6938/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6938/events
|
https://github.com/huggingface/datasets/pull/6938
| 2,327,568,281
|
PR_kwDODunzps5xHNKm
| 6,938
|
Fix expected splits when passing data_files or dir
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"fix is included in https://github.com/huggingface/datasets/pull/6925"
] | 2024-05-31T11:04:22Z
| 2024-05-31T15:28:03Z
| 2024-05-31T15:28:02Z
|
MEMBER
| null | null | null |
reported on slack:
The following code snippet gives an error with v2.19 but not with v2.18:
```
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
and the error is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1140, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py", line 92, in verify_splits
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'test'}
```
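A possible way to sidestep the failing split verification while this is being fixed (a sketch only, not the fix in this PR, assuming the installed `datasets` version supports `verification_mode`):
```python
from datasets import load_dataset

# Workaround sketch: skip split verification so the missing "test" split
# no longer raises ExpectedMoreSplits.
dataset = load_dataset(
    "lvwerra/stack-exchange-paired",
    split="train",
    data_dir="data/rl",
    verification_mode="no_checks",
)
```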
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6938/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6938/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6938",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6938"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4821
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4821/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4821/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4821/events
|
https://github.com/huggingface/datasets/pull/4821
| 1,335,664,588
|
PR_kwDODunzps49AvaE
| 4,821
|
Fix train_test_split docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-11T08:55:45Z
| 2022-08-11T09:59:29Z
| 2022-08-11T09:45:40Z
|
CONTRIBUTOR
| null | null | null |
I saw that `stratify` was added to the `train_test_split` method in #4322, so the docs can be updated.
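A small sketch of the stratified usage the updated docs should cover (toy data; `stratify_by_column` requires the column to be a `ClassLabel` feature):
```python
from datasets import ClassLabel, Dataset

ds = Dataset.from_dict({"text": list("abcdefgh"), "label": [0, 0, 0, 0, 1, 1, 1, 1]})
ds = ds.cast_column("label", ClassLabel(names=["neg", "pos"]))

# Stratified 75/25 split: each split keeps the same label proportions.
splits = ds.train_test_split(test_size=0.25, stratify_by_column="label")
print(splits["train"].num_rows, splits["test"].num_rows)
```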
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4821/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4821/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4821.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4821",
"merged_at": "2022-08-11T09:45:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4821.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4821"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6303
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6303/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6303/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6303/events
|
https://github.com/huggingface/datasets/issues/6303
| 1,943,466,532
|
I_kwDODunzps5z1vIk
| 6,303
|
Parquet uploads off-by-one naming scheme
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZachNagengast",
"id": 1981179,
"login": "ZachNagengast",
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZachNagengast",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"You can find the reasoning behind this naming scheme [here](https://github.com/huggingface/transformers/pull/16343#discussion_r931182168).\r\n\r\nThis point has been raised several times, so I'd be okay with starting with `00001-` (also to be consistent with the `transformers` sharding), but I'm not sure @lhoestq agrees.",
"We start at 0 in `datasets` for consistency with Apache Spark, Apache Beam, Dask and others.\r\n\r\nAlso note `transformers` isn't a good reference on this topic. I talked with the maintainers when they added shards but it was already released this way. Though we found that there is a backward-compatible way in `transformers` to start at 0, but no request from `transformers` users to changes this AFAIK.",
"not sure it would be a good idea to break the consistency now, IMO",
"Makes sense to start at 0 for plenty of good reasons so I'm on board.\r\n\r\nWhat about the second part `-of-0000X`? With single commit PR #6269 just getting merged, there was a note about issues with 100+ file edits https://github.com/huggingface/datasets/pull/6269#issuecomment-1755428581.\r\n\r\nThat would be my last remaining concern in the context of the `push_to_hub(..., append=True)` work to be done, where appending a single file to the full dataset will require renaming every other existing file in the dataset. If it doesn't seem like a big issue for this work then all the better 👍"
] | 2023-10-14T18:31:03Z
| 2023-10-16T16:33:21Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion: what is the actual proper way to store these files?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71e7ce">
The `-SSSSS-of-NNNNN` pattern seems to be used widely across the codebase. The section that creates the part in my screenshot is here: https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5287
There are also some edits to this section in the single commit branch.
### Steps to reproduce the bug
1. Upload a dataset that requires at least two parquet files in it
2. Observe the naming scheme
### Expected behavior
The couple of options here are, of course, **1. keeping it as is**
**2. Starting the index at 1:**
train-00001-of-00002-{hash}.parquet
train-00002-of-00002-{hash}.parquet
**3. My preferred option** (which would solve my specific issue), dropping the total entirely:
train-00000-{hash}.parquet
train-00001-{hash}.parquet
This also solves an issue that will occur with an `append` variable for `push_to_hub` (see https://github.com/huggingface/datasets/issues/6290) where as you add a new parquet file, you need to rename everything in the repo as well.
However, I know there are parts of the repo that use 0 as the starting index or may require the total, so I'm raising the question for discussion.
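For reference, a quick sketch of how the three options would render for a two-shard train split (the hash is a placeholder):
```python
split, num_shards, file_hash = "train", 2, "abc123"

for i in range(num_shards):
    print(f"option 1: {split}-{i:05d}-of-{num_shards:05d}-{file_hash}.parquet")      # current scheme
    print(f"option 2: {split}-{i + 1:05d}-of-{num_shards:05d}-{file_hash}.parquet")  # start at 1
    print(f"option 3: {split}-{i:05d}-{file_hash}.parquet")                          # drop the total
```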
### Environment info
- `datasets` version: 2.14.6.dev0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.18.0
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6303/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6303/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4889
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4889/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4889/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4889/events
|
https://github.com/huggingface/datasets/issues/4889
| 1,349,758,525
|
I_kwDODunzps5Qc649
| 4,889
|
torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.",
"torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 decoding is now handled by FFmpeg in sox_io backend. (https://github.com/pytorch/audio/pull/2419, https://github.com/pytorch/audio/pull/2428)\r\n> - FFmpeg is now used as fallback in sox_io backend, and now MP3 decoding is handled by FFmpeg. To load MP3 audio with torchaudio.load, please install a compatible version of FFmpeg (Version 4 when using an official binary distribution).\r\n> - Note that, whereas the previous MP3 decoding scheme pads the output audio, the new scheme does not. As a consequence, the new version returns shorter audio tensors.",
"Do we have a solution for this now? Should we just upgrade to `torchaudio 0.12.0` then? ",
"`datasets` supports `torchaudio` 0.12 if you have an environment that supports reading MP3 with `torchaudio`, i.e. if you have `ffmpeg>=4`",
"Closing as we no longer use `torchaudio` for decoding."
] | 2022-08-24T16:54:43Z
| 2023-03-02T15:33:05Z
| 2023-03-02T15:33:04Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
When loading Common Voice with torchaudio 0.11.0, the results differ from those with torchaudio 0.12.1, which leads to problems in transformers; see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and `torchaudio==0.12.1+cu102` you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers.
```python
#!/usr/bin/env python3
from datasets import load_dataset
import datasets
import numpy as np
import torch
import torchaudio
print("torch vesion", torch.__version__)
print("torchaudio vesion", torchaudio.__version__)
save_audio = True
load_audios = False
if save_audio:
ds = load_dataset("common_voice", "en", split="train", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
ds_iter = iter(ds)
sample = next(ds_iter)
np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"])
print(sample["audio"]["array"])
if load_audios:
array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy")
print("Array 11 Shape", array_torch_11.shape)
print("Array 11 abs sum", np.sum(np.abs(array_torch_11)))
array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy")
print("Array 12 Shape", array_torch_12.shape)
print("Array 12 abs sum", np.sum(np.abs(array_torch_12)))
```
Having saved the tensors the print output yields:
```
torch version 1.12.1+cu102
torchaudio version 0.12.1+cu102
Array 11 Shape (122880,)
Array 11 abs sum 1396.4988
Array 12 Shape (123264,)
Array 12 abs sum 1396.5193
```
## Expected results
torchaudio 0.11.0 and 0.12.1 should yield the same results.
## Actual results
See above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.1.dev0
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4889/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4889/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5745
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5745/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5745/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5745/events
|
https://github.com/huggingface/datasets/pull/5745
| 1,667,086,143
|
PR_kwDODunzps5ORE2n
| 5,745
|
[BUG FIX] Issue 5744
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4",
"events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}",
"followers_url": "https://api.github.com/users/keyboardAnt/followers",
"following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}",
"gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keyboardAnt",
"id": 15572698,
"login": "keyboardAnt",
"node_id": "MDQ6VXNlcjE1NTcyNjk4",
"organizations_url": "https://api.github.com/users/keyboardAnt/orgs",
"received_events_url": "https://api.github.com/users/keyboardAnt/received_events",
"repos_url": "https://api.github.com/users/keyboardAnt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keyboardAnt",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.",
"Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only passes it to pandas if the user passes it to `load_dataset`.\r\n\r\nYou should better:\r\n- Either \"take steps to stop the use of 'mangle_dupe_cols'\" (as it was suggested in the deprecation warning in pandas-1.5.3)\r\n- Or pin pandas (< 2.0.0) in your local requirements file\r\n\r\nPlease note that from `datasets` library, we don't want to force users to use a specific pandas version. We would like to support users as well:\r\n- that use pandas < 1.5.3\r\n- that use pandas >= 2.0.0 and that do not pass the 'mangle_dupe_cols' parameter",
"`datasets` 2.11 doesn't pass `mangle_dupe_cols` unless the user specifies it indeed, so I think we're fine"
] | 2023-04-13T20:29:55Z
| 2023-04-21T15:22:43Z
| null |
NONE
| null | null | null |
A temporary fix for https://github.com/huggingface/datasets/issues/5744.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5745/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5745/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5745.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5745",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5745.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5745"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5168
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5168/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5168/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5168/events
|
https://github.com/huggingface/datasets/pull/5168
| 1,424,368,572
|
PR_kwDODunzps5BmYnq
| 5,168
|
Fix CI require beam
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm merging this PR because it is quite a trivial fix and this is required by:\r\n- #5166"
] | 2022-10-26T16:49:33Z
| 2022-10-27T09:25:19Z
| 2022-10-27T09:23:26Z
|
MEMBER
| null | null | null |
This PR:
- Fixes the CI `require_beam`: before it was requiring PyTorch instead
```python
def require_beam(test_case):
if not config.TORCH_AVAILABLE:
test_case = unittest.skip("test requires PyTorch")(test_case)
return test_case
```
- Fixes a missing `require_beam` in `test_beam_based_builder_download_and_prepare_as_parquet`
- Refactors `require_beam` to use `pytest` (`skipif`) instead
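A minimal sketch of what the corrected decorator could look like (illustrative only; the real helper lives in the repo's test utilities and may detect Beam differently):
```python
import importlib.util

import pytest


def require_beam(test_case):
    # Skip the wrapped test when Apache Beam is not importable, instead of
    # (incorrectly) checking for PyTorch as the old version did.
    return pytest.mark.skipif(
        importlib.util.find_spec("apache_beam") is None,
        reason="test requires Apache Beam",
    )(test_case)
```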
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5168/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5168/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5168",
"merged_at": "2022-10-27T09:23:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5168"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4972
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4972/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4972/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4972/events
|
https://github.com/huggingface/datasets/pull/4972
| 1,371,443,306
|
PR_kwDODunzps4-3VVF
| 4,972
|
Fix map batched with torch output
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-13T13:16:34Z
| 2022-09-20T09:42:02Z
| 2022-09-20T09:39:33Z
|
MEMBER
| null | null | null |
Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2
Currently it fails if one uses batched `map` and the map function returns a torch tensor.
I fixed it for torch, tf, jax and pandas series.
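A minimal sketch of the previously failing pattern (column names and values are illustrative):
```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [1.0, 2.0, 3.0]})

# Batched map whose function returns a torch tensor for the new column;
# this is the pattern that used to fail before the fix.
ds = ds.map(lambda batch: {"x2": torch.tensor(batch["x"]) * 2}, batched=True)
print(ds["x2"])
```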
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4972/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4972/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4972.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4972",
"merged_at": "2022-09-20T09:39:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4972.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4972"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7411
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7411/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7411/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7411/events
|
https://github.com/huggingface/datasets/pull/7411
| 2,858,993,390
|
PR_kwDODunzps6LhV0Z
| 7,411
|
Attempt to fix multiprocessing hang by closing and joining the pool before termination
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dakinggg",
"id": 43149077,
"login": "dakinggg",
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dakinggg",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7411). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the fix! We have been affected by this a lot when we try to use LLM Foundry with larger multimodal ICL datasets. ",
"@lorabit110 are you able to test it out for your case as well? Would be great to get a second validation that it actually fixes the issue. Thanks!"
] | 2025-02-17T23:58:03Z
| 2025-02-19T21:11:24Z
| 2025-02-19T13:40:32Z
|
CONTRIBUTOR
| null | null | null |
https://github.com/huggingface/datasets/issues/6393 has plagued me on and off for a very long time. I have had various workarounds (one time combining two filter calls into one filter call removed the issue, another time making rank 0 go first resolved a cache race condition, one time I think upgrading the version of something resolved it). I don't know hf datasets well enough to fully understand the root cause, but I _think_ this PR fixes it.
Evidence: I have an LLM Foundry training yaml/script (datasets version 3.2.0) that results in a hang ~1/10 times (for a baseline for this testing, it was 2/36 runs that hung). I also reran with the latest datasets version (3.3.1) and got 4/36 hung. Installing datasets from this PR, I was able to successfully run the script 144 times without a hang occurring. Assuming the base probability is 1/10, this should be more than enough times to have confidence it works.
After adding some logging, I could see that the code hung during the `__exit__` of the mp pool context manager, after all shards had been processed, and the tqdm context manager had exited.
My best explanation: when the multiprocessing pool `__exit__` is called, it calls `pool.terminate`, which forcefully exits all the processes (and calls code related to this that I haven't looked at closely). I'm guessing this forceful termination has a bad interaction with some multithreading/multiprocessing that hf datasets does. If we instead call `pool.close` and `pool.join` before `pool.terminate` happens, perhaps whatever that bad interaction is can complete gracefully, and then the terminate call proceeds without issue.
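A standalone sketch of the shutdown ordering described above (not the actual `datasets` code; the worker function is a placeholder):
```python
from multiprocessing import Pool


def work(shard_id):
    # Placeholder for per-shard processing.
    return shard_id * 2


if __name__ == "__main__":
    pool = Pool(4)
    try:
        results = pool.map(work, range(8))
    finally:
        pool.close()      # stop accepting new work
        pool.join()       # let workers drain and exit cleanly
        pool.terminate()  # termination now happens after a graceful shutdown
    print(results)
```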
If this PR seems good to you, I'd be very appreciative if you were able to do a patch release including it. Thank you!
@lhoestq
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7411/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7411/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7411",
"merged_at": "2025-02-19T13:40:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7411"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7448
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7448/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7448/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7448/events
|
https://github.com/huggingface/datasets/issues/7448
| 2,916,025,762
|
I_kwDODunzps6tzwWi
| 7,448
|
`datasets.disable_caching` doesn't work
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35629974?v=4",
"events_url": "https://api.github.com/users/UCC-team/events{/privacy}",
"followers_url": "https://api.github.com/users/UCC-team/followers",
"following_url": "https://api.github.com/users/UCC-team/following{/other_user}",
"gists_url": "https://api.github.com/users/UCC-team/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/UCC-team",
"id": 35629974,
"login": "UCC-team",
"node_id": "MDQ6VXNlcjM1NjI5OTc0",
"organizations_url": "https://api.github.com/users/UCC-team/orgs",
"received_events_url": "https://api.github.com/users/UCC-team/received_events",
"repos_url": "https://api.github.com/users/UCC-team/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/UCC-team/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/UCC-team/subscriptions",
"type": "User",
"url": "https://api.github.com/users/UCC-team",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"cc",
"Yes I have the same issue. It's a confusingly named function. See [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L115-L130)\n\n```\n...\nIf disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\n More precisely, if the caching is disabled:\n - cache files are always recreated\n - cache files are written to a temporary directory that is deleted when session closes\n - cache files are named using a random hash instead of the dataset fingerprint\n```\n\nAlso, unfortunately the member variable `ds.cache_files` is not populated either.\n\nI'll let you know if I find a solution."
] | 2025-03-13T06:40:12Z
| 2025-03-22T04:37:07Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function.
I tried `datasets.disable_caching`, but it doesn't work!
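A possible workaround sketch (not a fix, and behaviour may vary by version): pointing `cache_dir` at a fresh temporary directory forces the dataset to be rebuilt instead of reusing the fingerprinted cache.
```python
import tempfile

from datasets import Dataset


def my_gen():
    # Placeholder generator; edits here are what should take effect on reload.
    yield {"text": "hello"}


ds = Dataset.from_generator(my_gen, cache_dir=tempfile.mkdtemp())
```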
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7448/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7448/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7348
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7348/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7348/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7348/events
|
https://github.com/huggingface/datasets/pull/7348
| 2,766,128,230
|
PR_kwDODunzps6Gldcy
| 7,348
|
Catch OSError for arrow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7348). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-01-02T14:30:00Z
| 2025-01-09T14:25:06Z
| 2025-01-09T14:25:04Z
|
MEMBER
| null | null | null |
fixes https://github.com/huggingface/datasets/issues/7346
(also updated `ruff` and applied style changes)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7348/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7348/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7348.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7348",
"merged_at": "2025-01-09T14:25:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7348.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7348"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6946
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6946/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6946/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6946/events
|
https://github.com/huggingface/datasets/pull/6946
| 2,330,276,848
|
PR_kwDODunzps5xQNao
| 6,946
|
Re-enable import sorting disabled by flake8:noqa directive when using ruff linter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6946). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004847 / 0.011353 (-0.006506) | 0.003199 / 0.011008 (-0.007810) | 0.060677 / 0.038508 (0.022169) | 0.030544 / 0.023109 (0.007435) | 0.240870 / 0.275898 (-0.035028) | 0.261320 / 0.323480 (-0.062160) | 0.002816 / 0.007986 (-0.005170) | 0.002483 / 0.004328 (-0.001845) | 0.048527 / 0.004250 (0.044277) | 0.045496 / 0.037052 (0.008444) | 0.251296 / 0.258489 (-0.007193) | 0.285746 / 0.293841 (-0.008095) | 0.025076 / 0.128546 (-0.103470) | 0.009417 / 0.075646 (-0.066229) | 0.191361 / 0.419271 (-0.227911) | 0.033778 / 0.043533 (-0.009755) | 0.235581 / 0.255139 (-0.019558) | 0.261069 / 0.283200 (-0.022131) | 0.018255 / 0.141683 (-0.123428) | 1.098437 / 1.452155 (-0.353718) | 1.127124 / 1.492716 (-0.365592) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004479 / 0.018006 (-0.013527) | 0.283706 / 0.000490 (0.283216) | 0.000214 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018364 / 0.037411 (-0.019048) | 0.058398 / 0.014526 (0.043872) | 0.073056 / 0.176557 (-0.103501) | 0.117147 / 0.737135 (-0.619989) | 0.073683 / 0.296338 (-0.222656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.265121 / 0.215209 (0.049912) | 2.636981 / 2.077655 (0.559327) | 1.380192 / 1.504120 (-0.123928) | 1.270779 / 1.541195 (-0.270416) | 1.295729 / 
1.468490 (-0.172762) | 0.523768 / 4.584777 (-4.061009) | 2.295720 / 3.745712 (-1.449992) | 2.519211 / 5.269862 (-2.750650) | 1.618712 / 4.565676 (-2.946965) | 0.058321 / 0.424275 (-0.365954) | 0.004492 / 0.007607 (-0.003115) | 0.316101 / 0.226044 (0.090057) | 3.169913 / 2.268929 (0.900984) | 1.793412 / 55.444624 (-53.651213) | 1.473784 / 6.876477 (-5.402693) | 1.565325 / 2.142072 (-0.576748) | 0.592734 / 4.805227 (-4.212493) | 0.109333 / 6.500664 (-6.391331) | 0.039063 / 0.075469 (-0.036406) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935504 / 1.841788 (-0.906284) | 10.865520 / 8.074308 (2.791212) | 9.219337 / 10.191392 (-0.972055) | 0.135284 / 0.680424 (-0.545140) | 0.013664 / 0.534201 (-0.520537) | 0.271601 / 0.579283 (-0.307682) | 0.260456 / 0.434364 (-0.173908) | 0.302931 / 0.540337 (-0.237406) | 0.414643 / 1.386936 (-0.972293) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004801 / 0.011353 (-0.006552) | 0.003092 / 0.011008 (-0.007917) | 0.046471 / 0.038508 (0.007963) | 0.031337 / 0.023109 (0.008228) | 0.258920 / 0.275898 (-0.016978) | 0.269842 / 0.323480 (-0.053638) | 0.003976 / 0.007986 (-0.004009) | 0.002661 / 0.004328 (-0.001668) | 0.045676 / 0.004250 (0.041426) | 0.038199 / 0.037052 (0.001146) | 0.277382 / 0.258489 (0.018893) | 0.289351 / 0.293841 (-0.004490) | 0.028452 / 0.128546 (-0.100094) | 0.009737 / 0.075646 (-0.065910) | 0.055201 / 0.419271 (-0.364071) | 0.032686 / 0.043533 (-0.010847) | 0.259617 / 0.255139 (0.004478) | 0.277163 / 0.283200 (-0.006037) | 0.017825 / 0.141683 (-0.123858) | 1.102797 / 1.452155 (-0.349357) | 1.105018 / 1.492716 (-0.387699) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094844 / 0.018006 (0.076838) | 0.290519 / 0.000490 (0.290029) | 0.000211 / 0.000200 (0.000012) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021917 / 0.037411 (-0.015494) | 0.075278 / 0.014526 (0.060753) | 0.085971 / 0.176557 (-0.090586) | 0.127072 / 0.737135 (-0.610063) | 0.088244 / 0.296338 (-0.208095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276704 / 0.215209 (0.061495) | 2.736960 / 2.077655 (0.659305) | 1.519634 / 1.504120 (0.015514) | 1.403026 / 1.541195 (-0.138168) | 1.418465 / 1.468490 (-0.050025) | 0.552425 / 4.584777 (-4.032352) | 0.955244 / 3.745712 (-2.790468) | 2.556563 / 5.269862 (-2.713298) | 1.705095 / 4.565676 (-2.860582) | 0.061212 / 0.424275 (-0.363063) | 0.004707 / 0.007607 (-0.002900) | 0.326284 / 0.226044 (0.100239) | 3.253911 / 2.268929 (0.984983) | 1.868649 / 55.444624 (-53.575976) | 1.598697 / 6.876477 (-5.277780) | 1.682617 / 2.142072 (-0.459455) | 0.606379 / 4.805227 (-4.198848) | 0.114126 / 6.500664 (-6.386538) | 0.038869 / 0.075469 (-0.036601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966354 / 1.841788 (-0.875433) | 11.575918 / 8.074308 (3.501609) | 9.816597 / 10.191392 (-0.374795) | 0.141492 / 0.680424 (-0.538932) | 0.015375 / 0.534201 (-0.518826) | 0.276027 / 0.579283 (-0.303256) | 0.118979 / 0.434364 (-0.315385) | 0.313467 / 0.540337 (-0.226870) | 0.403539 / 1.386936 (-0.983397) |\n\n</details>\n</details>\n\n\n"
] | 2024-06-03T06:24:47Z
| 2024-06-04T10:00:08Z
| 2024-06-04T09:54:23Z
|
MEMBER
| null | null | null |
Re-enable import sorting that was wrongly disabled by the `flake8: noqa` directive after switching to the `ruff` linter in the datasets-2.10.0 PR:
- #5519
Note that after the linter switch, we wrongly replaced `flake8: noqa` with `ruff: noqa` in the datasets-2.17.0 PR:
- #6619
That replacement was wrong because we kept the `isort: skip` directives, even though they had no effect: they were disabled first by `flake8: noqa` and afterwards by `ruff: noqa`. See for example the `__init__.py` file after the linter switch:
- We kept the `flake8: noqa` directive
https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L1
- Whereas we also kept the `isort: skip` directives (which were effectively disabled)
https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L82-L84
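For context, a minimal illustrative sketch of the directive behaviour involved (the file below is a stand-in, not the actual `__init__.py`; E402 is ruff's "module level import not at top of file" rule and I001 its "unsorted imports" rule):
```
# Illustrative stand-in module, not datasets' real __init__.py. A blanket
# file-level "ruff: noqa" comment silences every rule, import sorting (I001)
# included, which makes any `# isort: skip` comments moot. Scoping the
# directive to the rules that genuinely need disabling, as below, lets ruff
# check import sorting again.
# ruff: noqa: E402

SOME_FLAG = True  # a module-level assignment placed before the imports

import os
import sys

print(SOME_FLAG, os.name, sys.version_info.major)
```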
Fix #6942.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6946/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6946/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6946.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6946",
"merged_at": "2024-06-04T09:54:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6946.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6946"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5929/events
|
https://github.com/huggingface/datasets/issues/5929
| 1,744,478,456
|
I_kwDODunzps5n-qD4
| 5,929
|
Importing PyTorch reduces multiprocessing performance for map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Maxscha",
"id": 12814709,
"login": "Maxscha",
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Maxscha",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.",
"Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigations after your comment and figured out it's only affecting some hardware/software configurations with the `pytorch` installation of `conda-forge`. Based on this we found the following issue in PyTorch: https://github.com/pytorch/pytorch/issues/102269 with a quick fix for now.\r\n\r\nSince it seems to be a deeper issue with forking processes, the difference between`multiprocess` and `multiprocessing` didn't make a difference.\r\n\r\nClosing this, since the issue comes from `pytorch` not `dataset`. \r\n"
] | 2023-06-06T19:42:25Z
| 2023-06-16T13:09:12Z
| 2023-06-16T13:09:12Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.
### Steps to reproduce the bug
I created two example scripts to reproduce this behavior:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
Takes around 4 seconds on my machine.
While the same code, but with `import torch` added:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
import torch
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
takes around 22 seconds.
### Expected behavior
I would expect the import of torch not to have such a significant effect on the performance of `map` using multiprocessing.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
- torch: 2.0.1
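For a fairer side-by-side comparison than running the two scripts above by hand, here is a hedged harness (not part of the original report) that times the same `map` call in a fresh interpreter with and without a preceding `import torch`. It assumes a Linux fork start method, as in the environment above; the dataset size and `num_proc` below are arbitrary:
```
import subprocess
import sys
import textwrap

# Benchmark body run in a fresh interpreter; {maybe_import_torch} is filled in
# with either nothing or "import torch" before the datasets import.
BENCH = textwrap.dedent("""
    import time
    {maybe_import_torch}
    import datasets
    datasets.disable_caching()
    from datasets import Dataset

    if __name__ == "__main__":
        dataset = Dataset.from_dict({{"train": [True] * 1_000_000}})
        start = time.time()
        dataset.map(lambda x: x, num_proc=8)
        print(time.time() - start)
""")

for label, extra_import in [("without torch", ""), ("with torch", "import torch")]:
    code = BENCH.format(maybe_import_torch=extra_import)
    # Run each variant in its own process so neither is affected by modules
    # already imported in the current interpreter.
    result = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)
    print(label, result.stdout.strip())
```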
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Maxscha",
"id": 12814709,
"login": "Maxscha",
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Maxscha",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5929/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5929/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5251
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5251/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5251/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5251/events
|
https://github.com/huggingface/datasets/issues/5251
| 1,451,761,321
|
I_kwDODunzps5WiB6p
| 5,251
|
Docs are not generated after latest release
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
| null |
[] | null |
[
"After a discussion with @mishig25:\r\n- He said that this action should be triggered if we call our release branch according to the regex `v*-release`, as transformers does\r\n- I said that our procedure is different: our release branch is *temporary* and it is deleted just after the release PR is merged to main\r\n - Indeed the release tag is not yet created when we make the release PR (not event when this is merged to main), but when we make the Release itself.\r\n\r\nI was thinking that maybe we could change the triggering event: use `release` instead of `push`.\r\n\r\nWhat do you think, @huggingface/datasets?",
"Why is it an issue if our branch is temporary ?",
"He says not; but the branch has no tag yet; does the doc building require the tag? Or just the version number in `__init__.py` or setup.py?",
"It uses `module.__version__` (i.e. the one defined in `__init__.py`) - no need to have a tag\r\n\r\nhttps://github.com/huggingface/doc-builder/blob/81575cf081964c30ea5fd39450f4820db963f18e/src/doc_builder/commands/build.py#L69",
"Thanks, @lhoestq.\r\n\r\n@mishig25 has manually forced the generation of the docs, that are live for 2.7.0 version: https://huggingface.co/docs/datasets/v2.7.0/en/index ",
"Cool ! this can be closed then ?",
"I was waiting for #5250 to be merged to close this.",
"just to confirm, is there anything I need to do from my side ? Or is everything good here ?"
] | 2022-11-16T14:59:31Z
| 2022-11-22T16:27:50Z
| 2022-11-22T16:27:50Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
After the latest `datasets` release, version 2.7.0, the docs were not generated.
As we have changed the release procedure (so that now we do not push directly to the main branch), maybe we should also change the corresponding GitHub action:
https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad44b02a85/.github/workflows/build_documentation.yml#L3-L8
Related to:
- #5250
CC: @mishig25
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5251/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5251/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5691
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5691/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5691/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5691/events
|
https://github.com/huggingface/datasets/pull/5691
| 1,649,737,526
|
PR_kwDODunzps5NX08d
| 5,691
|
[docs] Compress data files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"[Confirmed](https://huggingface.slack.com/archives/C02EMARJ65P/p1680541667004199) with the Hub team the file size limit for the Hugging Face Hub is 10MB :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004935 / 0.011008 (-0.006073) | 0.096796 / 0.038508 (0.058288) | 0.032485 / 0.023109 (0.009376) | 0.335342 / 0.275898 (0.059444) | 0.354999 / 0.323480 (0.031519) | 0.005467 / 0.007986 (-0.002519) | 0.005267 / 0.004328 (0.000939) | 0.073988 / 0.004250 (0.069737) | 0.044402 / 0.037052 (0.007350) | 0.331156 / 0.258489 (0.072666) | 0.363595 / 0.293841 (0.069754) | 0.035301 / 0.128546 (-0.093245) | 0.012141 / 0.075646 (-0.063505) | 0.333164 / 0.419271 (-0.086107) | 0.048818 / 0.043533 (0.005286) | 0.331458 / 0.255139 (0.076319) | 0.343567 / 0.283200 (0.060367) | 0.094963 / 0.141683 (-0.046720) | 1.444383 / 1.452155 (-0.007772) | 1.520093 / 1.492716 (0.027377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212311 / 0.018006 (0.194305) | 0.436413 / 0.000490 (0.435923) | 0.000333 / 0.000200 (0.000133) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026670 / 0.037411 (-0.010742) | 0.105774 / 0.014526 (0.091248) | 0.115796 / 0.176557 (-0.060760) | 0.176504 / 0.737135 (-0.560631) | 0.121883 / 0.296338 (-0.174456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400783 / 0.215209 (0.185574) | 4.006608 / 2.077655 (1.928953) | 1.817659 / 1.504120 (0.313539) | 1.619777 / 1.541195 (0.078582) | 1.684247 / 1.468490 
(0.215757) | 0.701116 / 4.584777 (-3.883661) | 3.684056 / 3.745712 (-0.061656) | 2.065258 / 5.269862 (-3.204603) | 1.425460 / 4.565676 (-3.140217) | 0.084519 / 0.424275 (-0.339757) | 0.011949 / 0.007607 (0.004342) | 0.496793 / 0.226044 (0.270749) | 4.978864 / 2.268929 (2.709935) | 2.303388 / 55.444624 (-53.141237) | 1.978341 / 6.876477 (-4.898135) | 2.055744 / 2.142072 (-0.086329) | 0.832022 / 4.805227 (-3.973206) | 0.164715 / 6.500664 (-6.335949) | 0.062701 / 0.075469 (-0.012768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.178723 / 1.841788 (-0.663065) | 14.583986 / 8.074308 (6.509678) | 14.189402 / 10.191392 (3.998010) | 0.183867 / 0.680424 (-0.496557) | 0.017565 / 0.534201 (-0.516636) | 0.421345 / 0.579283 (-0.157938) | 0.420235 / 0.434364 (-0.014129) | 0.496758 / 0.540337 (-0.043580) | 0.591558 / 1.386936 (-0.795378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007019 / 0.011353 (-0.004334) | 0.004996 / 0.011008 (-0.006012) | 0.073345 / 0.038508 (0.034836) | 0.033077 / 0.023109 (0.009968) | 0.335954 / 0.275898 (0.060056) | 0.372616 / 0.323480 (0.049136) | 0.005678 / 0.007986 (-0.002308) | 0.003906 / 0.004328 (-0.000423) | 0.072841 / 0.004250 (0.068591) | 0.046829 / 0.037052 (0.009777) | 0.335177 / 0.258489 (0.076688) | 0.382862 / 0.293841 (0.089021) | 0.038406 / 0.128546 (-0.090141) | 0.012110 / 0.075646 (-0.063536) | 0.085796 / 0.419271 (-0.333476) | 0.049896 / 0.043533 (0.006363) | 0.338232 / 0.255139 (0.083093) | 0.361054 / 0.283200 (0.077855) | 0.103171 / 0.141683 (-0.038512) | 1.556692 / 1.452155 (0.104538) | 1.540023 / 1.492716 (0.047306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223705 / 0.018006 (0.205699) | 0.438771 / 0.000490 (0.438282) | 0.002838 / 0.000200 (0.002639) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028423 / 0.037411 (-0.008988) | 0.110560 / 0.014526 (0.096035) | 0.121629 / 0.176557 (-0.054928) | 0.173638 / 0.737135 (-0.563498) | 0.127062 / 0.296338 (-0.169277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425806 / 0.215209 (0.210597) | 4.251051 / 2.077655 (2.173397) | 2.059735 / 1.504120 (0.555615) | 1.864886 / 1.541195 (0.323692) | 1.941553 / 1.468490 (0.473063) | 0.700084 / 4.584777 (-3.884693) | 3.753150 / 3.745712 (0.007438) | 3.218606 / 5.269862 (-2.051256) | 1.439648 / 4.565676 (-3.126028) | 0.085239 / 0.424275 (-0.339037) | 0.012026 / 0.007607 (0.004419) | 0.521564 / 0.226044 (0.295520) | 5.217902 / 2.268929 (2.948973) | 2.557831 / 55.444624 (-52.886793) | 2.240223 / 6.876477 (-4.636254) | 2.364664 / 2.142072 (0.222591) | 0.825884 / 4.805227 (-3.979343) | 0.167800 / 6.500664 (-6.332864) | 0.063552 / 0.075469 (-0.011917) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255532 / 1.841788 (-0.586256) | 14.747783 / 8.074308 (6.673475) | 14.352263 / 10.191392 (4.160871) | 0.143659 / 0.680424 (-0.536765) | 0.017517 / 0.534201 (-0.516684) | 0.419863 / 0.579283 (-0.159421) | 0.416674 / 0.434364 (-0.017690) | 0.485694 / 0.540337 (-0.054643) | 0.584810 / 1.386936 (-0.802126) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-31T17:17:26Z
| 2023-04-19T13:37:32Z
| 2023-04-19T07:25:58Z
|
MEMBER
| null | null | null |
This PR addresses the comments in #5687 about compressing text files before uploading them to the Hub. It also clarifies what "too large" means based on the Git LFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage).
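As a hedged illustration of the kind of guidance this documents (the file names below are hypothetical), a plain-text data file can be gzip-compressed before uploading, and `load_dataset` still reads it directly because common compression formats are decompressed transparently:
```
import gzip
import shutil

from datasets import load_dataset

# Compress a (hypothetical) local text file before uploading it to the Hub.
with open("my_corpus.txt", "rb") as src, gzip.open("my_corpus.txt.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# The compressed file can be loaded directly; decompression is handled for us.
dataset = load_dataset("text", data_files="my_corpus.txt.gz", split="train")
print(dataset[0])
```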
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5691/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5691/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5691.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5691",
"merged_at": "2023-04-19T07:25:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5691.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5691"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6171
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6171/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6171/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6171/events
|
https://github.com/huggingface/datasets/pull/6171
| 1,862,922,767
|
PR_kwDODunzps5Yk4AS
| 6,171
|
Fix typo in about_mapstyle_vs_iterable.mdx
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6171). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009315 / 0.011353 (-0.002038) | 0.004931 / 0.011008 (-0.006077) | 0.100534 / 0.038508 (0.062026) | 0.089270 / 0.023109 (0.066161) | 0.394995 / 0.275898 (0.119097) | 0.440244 / 0.323480 (0.116764) | 0.006026 / 0.007986 (-0.001959) | 0.004252 / 0.004328 (-0.000077) | 0.078828 / 0.004250 (0.074577) | 0.066770 / 0.037052 (0.029718) | 0.411152 / 0.258489 (0.152663) | 0.445616 / 0.293841 (0.151775) | 0.048344 / 0.128546 (-0.080203) | 0.013700 / 0.075646 (-0.061946) | 0.361205 / 0.419271 (-0.058066) | 0.072085 / 0.043533 (0.028552) | 0.399173 / 0.255139 (0.144034) | 0.439334 / 0.283200 (0.156134) | 0.035815 / 0.141683 (-0.105868) | 1.779023 / 1.452155 (0.326868) | 1.865099 / 1.492716 (0.372383) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275978 / 0.018006 (0.257972) | 0.588850 / 0.000490 (0.588360) | 0.004953 / 0.000200 (0.004754) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031329 / 0.037411 (-0.006082) | 0.095435 / 0.014526 (0.080910) | 0.111182 / 0.176557 (-0.065375) | 0.177692 / 0.737135 (-0.559444) | 0.113345 / 0.296338 (-0.182993) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577882 / 0.215209 (0.362673) | 5.865872 / 2.077655 (3.788217) | 2.664218 / 1.504120 (1.160098) | 2.383354 / 1.541195 (0.842159) | 2.336821 / 1.468490 
(0.868331) | 0.834585 / 4.584777 (-3.750192) | 5.418720 / 3.745712 (1.673008) | 4.551790 / 5.269862 (-0.718072) | 2.921874 / 4.565676 (-1.643803) | 0.095738 / 0.424275 (-0.328537) | 0.009625 / 0.007607 (0.002018) | 0.688317 / 0.226044 (0.462273) | 6.831826 / 2.268929 (4.562897) | 3.482607 / 55.444624 (-51.962017) | 2.633482 / 6.876477 (-4.242995) | 2.878786 / 2.142072 (0.736714) | 0.971615 / 4.805227 (-3.833613) | 0.208661 / 6.500664 (-6.292003) | 0.080271 / 0.075469 (0.004802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.661193 / 1.841788 (-0.180594) | 24.223041 / 8.074308 (16.148733) | 21.621791 / 10.191392 (11.430399) | 0.243809 / 0.680424 (-0.436614) | 0.031630 / 0.534201 (-0.502571) | 0.501408 / 0.579283 (-0.077875) | 0.600002 / 0.434364 (0.165638) | 0.572066 / 0.540337 (0.031728) | 0.791992 / 1.386936 (-0.594944) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009410 / 0.011353 (-0.001943) | 0.005255 / 0.011008 (-0.005753) | 0.079202 / 0.038508 (0.040693) | 0.078973 / 0.023109 (0.055863) | 0.557416 / 0.275898 (0.281518) | 0.560417 / 0.323480 (0.236937) | 0.007066 / 0.007986 (-0.000920) | 0.004560 / 0.004328 (0.000232) | 0.080359 / 0.004250 (0.076109) | 0.060071 / 0.037052 (0.023019) | 0.538441 / 0.258489 (0.279952) | 0.592486 / 0.293841 (0.298645) | 0.053221 / 0.128546 (-0.075325) | 0.014056 / 0.075646 (-0.061591) | 0.094084 / 0.419271 (-0.325188) | 0.066721 / 0.043533 (0.023188) | 0.521873 / 0.255139 (0.266734) | 0.579637 / 0.283200 (0.296437) | 0.041476 / 0.141683 (-0.100206) | 1.829681 / 1.452155 (0.377527) | 1.948418 / 1.492716 (0.455702) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.347594 / 0.018006 (0.329588) | 0.606906 / 0.000490 (0.606417) | 0.035413 / 0.000200 (0.035213) | 0.000371 / 0.000054 (0.000317) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031987 / 0.037411 (-0.005425) | 0.096985 / 0.014526 (0.082459) | 0.109275 / 0.176557 (-0.067282) | 0.175340 / 0.737135 (-0.561795) | 0.110763 / 0.296338 (-0.185575) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634823 / 0.215209 (0.419614) | 6.527172 / 2.077655 (4.449517) | 3.135709 / 1.504120 (1.631589) | 2.634357 / 1.541195 (1.093162) | 2.670583 / 1.468490 (1.202093) | 0.888686 / 4.584777 (-3.696091) | 5.382289 / 3.745712 (1.636577) | 4.701189 / 5.269862 (-0.568673) | 3.161290 / 4.565676 (-1.404386) | 0.112414 / 0.424275 (-0.311861) | 0.009443 / 0.007607 (0.001836) | 0.774703 / 0.226044 (0.548658) | 7.905334 / 2.268929 (5.636405) | 3.689548 / 55.444624 (-51.755076) | 3.087263 / 6.876477 (-3.789214) | 3.366568 / 2.142072 (1.224496) | 1.185951 / 4.805227 (-3.619277) | 0.248638 / 6.500664 (-6.252026) | 0.104598 / 0.075469 (0.029129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.820667 / 1.841788 (-0.021120) | 24.536703 / 8.074308 (16.462395) | 23.083964 / 10.191392 (12.892572) | 0.252897 / 0.680424 (-0.427527) | 0.032954 / 0.534201 (-0.501247) | 0.482467 / 0.579283 (-0.096816) | 0.602247 / 0.434364 (0.167883) | 0.600563 / 0.540337 (0.060225) | 0.824013 / 1.386936 (-0.562923) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009242 / 0.011353 (-0.002111) | 0.005244 / 0.011008 (-0.005764) | 0.112678 / 0.038508 (0.074170) | 0.089176 / 0.023109 (0.066067) | 0.405823 / 0.275898 (0.129925) | 0.465703 / 0.323480 (0.142223) | 0.005227 / 0.007986 (-0.002758) | 0.004296 / 0.004328 (-0.000032) | 0.082961 / 0.004250 (0.078711) | 0.063144 / 0.037052 (0.026092) | 0.422369 / 0.258489 (0.163880) | 0.478185 / 0.293841 (0.184344) | 0.049770 / 0.128546 (-0.078776) | 0.016561 / 0.075646 (-0.059086) | 0.380172 / 0.419271 (-0.039100) | 0.068698 / 0.043533 (0.025165) | 0.397773 / 0.255139 (0.142634) | 0.461284 / 0.283200 (0.178084) | 0.036907 / 0.141683 (-0.104775) | 1.828017 / 1.452155 (0.375862) | 2.028385 / 1.492716 (0.535669) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291245 / 0.018006 (0.273239) | 0.605519 / 0.000490 (0.605030) | 0.003790 / 0.000200 (0.003590) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029269 / 0.037411 (-0.008142) | 0.087014 / 0.014526 (0.072488) | 0.116984 / 0.176557 (-0.059573) | 0.170644 / 0.737135 (-0.566491) | 0.109011 / 0.296338 (-0.187328) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.603045 / 0.215209 (0.387836) | 6.125308 / 2.077655 (4.047653) | 2.637127 / 1.504120 (1.133007) | 2.468636 / 1.541195 (0.927441) | 2.383773 / 1.468490 
(0.915283) | 0.838139 / 4.584777 (-3.746638) | 5.355777 / 3.745712 (1.610065) | 4.753015 / 5.269862 (-0.516846) | 3.097486 / 4.565676 (-1.468191) | 0.094749 / 0.424275 (-0.329526) | 0.009040 / 0.007607 (0.001433) | 0.699987 / 0.226044 (0.473942) | 7.111671 / 2.268929 (4.842742) | 3.297798 / 55.444624 (-52.146827) | 2.614578 / 6.876477 (-4.261898) | 2.927717 / 2.142072 (0.785645) | 1.037292 / 4.805227 (-3.767935) | 0.218025 / 6.500664 (-6.282639) | 0.086306 / 0.075469 (0.010836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.645146 / 1.841788 (-0.196642) | 24.191875 / 8.074308 (16.117567) | 21.844371 / 10.191392 (11.652979) | 0.245369 / 0.680424 (-0.435055) | 0.031776 / 0.534201 (-0.502425) | 0.465634 / 0.579283 (-0.113649) | 0.565498 / 0.434364 (0.131134) | 0.497409 / 0.540337 (-0.042929) | 0.748048 / 1.386936 (-0.638889) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009239 / 0.011353 (-0.002114) | 0.005345 / 0.011008 (-0.005663) | 0.072732 / 0.038508 (0.034224) | 0.099880 / 0.023109 (0.076770) | 0.466933 / 0.275898 (0.191035) | 0.471730 / 0.323480 (0.148250) | 0.006164 / 0.007986 (-0.001821) | 0.004486 / 0.004328 (0.000158) | 0.075475 / 0.004250 (0.071224) | 0.068291 / 0.037052 (0.031238) | 0.465925 / 0.258489 (0.207436) | 0.469198 / 0.293841 (0.175357) | 0.047304 / 0.128546 (-0.081242) | 0.013368 / 0.075646 (-0.062278) | 0.083563 / 0.419271 (-0.335708) | 0.063204 / 0.043533 (0.019671) | 0.457422 / 0.255139 (0.202283) | 0.478793 / 0.283200 (0.195593) | 0.036120 / 0.141683 (-0.105563) | 1.841209 / 1.452155 (0.389054) | 1.955984 / 1.492716 (0.463267) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.369160 / 0.018006 (0.351154) | 0.607140 / 0.000490 (0.606650) | 0.047253 / 0.000200 (0.047054) | 0.000475 / 0.000054 (0.000420) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.040226 / 0.037411 (0.002815) | 0.107361 / 0.014526 (0.092835) | 0.122424 / 0.176557 (-0.054133) | 0.186447 / 0.737135 (-0.550688) | 0.127060 / 0.296338 (-0.169279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.706737 / 0.215209 (0.491528) | 6.791287 / 2.077655 (4.713632) | 3.194471 / 1.504120 (1.690352) | 2.928145 / 1.541195 (1.386950) | 2.829078 / 1.468490 (1.360588) | 0.929797 / 4.584777 (-3.654980) | 5.484638 / 3.745712 (1.738926) | 4.841570 / 5.269862 (-0.428292) | 2.995247 / 4.565676 (-1.570430) | 0.104709 / 0.424275 (-0.319566) | 0.009543 / 0.007607 (0.001936) | 0.817605 / 0.226044 (0.591561) | 7.879234 / 2.268929 (5.610305) | 3.838073 / 55.444624 (-51.606551) | 3.189728 / 6.876477 (-3.686749) | 3.483775 / 2.142072 (1.341703) | 1.092823 / 4.805227 (-3.712404) | 0.227660 / 6.500664 (-6.273004) | 0.082452 / 0.075469 (0.006983) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.750413 / 1.841788 (-0.091374) | 27.078082 / 8.074308 (19.003774) | 23.968038 / 10.191392 (13.776646) | 0.248065 / 0.680424 (-0.432359) | 0.029961 / 0.534201 (-0.504240) | 0.508630 / 0.579283 (-0.070653) | 0.608707 / 0.434364 (0.174343) | 0.611062 / 0.540337 (0.070725) | 0.830797 / 1.386936 (-0.556139) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-23T09:21:11Z
| 2023-08-23T09:32:59Z
| 2023-08-23T09:21:19Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6171/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6171/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6171",
"merged_at": "2023-08-23T09:21:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6171"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6495
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6495/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6495/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6495/events
|
https://github.com/huggingface/datasets/issues/6495
| 2,039,708,529
|
I_kwDODunzps55k3tx
| 6,495
|
Newline characters don't behave as expected when calling dataset.info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gerald-wrona",
"id": 32300890,
"login": "gerald-wrona",
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gerald-wrona",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-12-12T23:07:51Z
| 2023-12-13T13:24:22Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@marios
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[Source](https://huggingface.co/docs/datasets/v2.2.1/en/access)
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673)
### Expected behavior
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(
description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n',
citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398',
license='',
features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')},
download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}},
download_size=1494541,
post_processing_size=None,
dataset_size=1492156,
size_in_bytes=2986697
)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6495/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6495/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6758/events
|
https://github.com/huggingface/datasets/issues/6758
| 2,208,494,302
|
I_kwDODunzps6DovLe
| 6,758
|
Passing `sample_by` to `load_dataset` when loading text data does not work
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/823693?v=4",
"events_url": "https://api.github.com/users/ntoxeg/events{/privacy}",
"followers_url": "https://api.github.com/users/ntoxeg/followers",
"following_url": "https://api.github.com/users/ntoxeg/following{/other_user}",
"gists_url": "https://api.github.com/users/ntoxeg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ntoxeg",
"id": 823693,
"login": "ntoxeg",
"node_id": "MDQ6VXNlcjgyMzY5Mw==",
"organizations_url": "https://api.github.com/users/ntoxeg/orgs",
"received_events_url": "https://api.github.com/users/ntoxeg/received_events",
"repos_url": "https://api.github.com/users/ntoxeg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ntoxeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntoxeg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ntoxeg",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting! We are working on a fix."
] | 2024-03-26T14:55:33Z
| 2024-04-09T11:27:59Z
| 2024-04-09T11:27:59Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines, paragraphs or take them whole. Passing `sample_by="document"` to `load_dataset` results in files getting split into lines regardless. I have edited `src/datasets/packaged_modules/text/text.py` for myself to switch the default and it works fine.
As a side note, the `if-else` for `sample_by` will silently load an empty dataset if someone makes a typo in the argument, which is not ideal.
### Steps to reproduce the bug
1. Prepare data as a bunch of files in a directory.
2. Load that data via `load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")`.
3. Inspect the resulting dataset: every item has the form `{"text": <a line from a file>}`.
### Expected behavior
`load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")` should result in a dataset with items of the form `{"text": <one document>}`.
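For completeness, a minimal runnable sketch of the intended call, assuming a local directory of `.txt` files (the glob path is a placeholder):
```python
from datasets import load_dataset

# With sample_by="document", each text file should become exactly one example.
ds = load_dataset(
    "text",
    data_files="my_docs/*.txt",   # placeholder glob
    sample_by="document",
    split="train",
)
print(ds[0])  # expected: {"text": "<the full contents of one file>"}
```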
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1046-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6758/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6758/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7443
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7443/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7443/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7443/events
|
https://github.com/huggingface/datasets/issues/7443
| 2,908,585,656
|
I_kwDODunzps6tXX64
| 7,443
|
index error when num_shards > len(dataset)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17934496?v=4",
"events_url": "https://api.github.com/users/eminorhan/events{/privacy}",
"followers_url": "https://api.github.com/users/eminorhan/followers",
"following_url": "https://api.github.com/users/eminorhan/following{/other_user}",
"gists_url": "https://api.github.com/users/eminorhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eminorhan",
"id": 17934496,
"login": "eminorhan",
"node_id": "MDQ6VXNlcjE3OTM0NDk2",
"organizations_url": "https://api.github.com/users/eminorhan/orgs",
"received_events_url": "https://api.github.com/users/eminorhan/received_events",
"repos_url": "https://api.github.com/users/eminorhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eminorhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eminorhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eminorhan",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Actually, looking at the code a bit more carefully, maybe an even better solution is to explicitly set `num_shards=len(self)` somewhere inside both `push_to_hub()` and `save_to_disk()` when these functions are invoked with `num_shards > len(dataset)`."
] | 2025-03-10T22:40:59Z
| 2025-03-10T23:43:08Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
In `ds.push_to_hub()` and `ds.save_to_disk()`, `num_shards` must be smaller than or equal to the number of rows in the dataset, but currently this is not checked anywhere inside these functions. Attempting to invoke these functions with `num_shards > len(dataset)` should raise an informative `ValueError`.
I frequently work with datasets that have a small number of rows, each of which is fairly large, so I often encounter this issue: the function runs until the shard index in `ds.shard(num_shards, index)` goes out of bounds. Ideally, a `ValueError` should be raised before reaching this point (i.e. as soon as `ds.push_to_hub()` or `ds.save_to_disk()` is invoked with `num_shards > len(dataset)`).
It seems that adding something like:
```python
if len(self) < num_shards:
raise ValueError(f"num_shards ({num_shards}) must be smaller than or equal to the number of rows in the dataset ({len(self)}). Please either reduce num_shards or increase max_shard_size to make sure num_shards <= len(dataset).")
```
to the beginning of the definition of the `ds.shard()` function [here](https://github.com/huggingface/datasets/blob/f693f4e93aabafa878470c80fd42ddb10ec550d6/src/datasets/arrow_dataset.py#L4728) would deal with this issue for both `ds.push_to_hub()` and `ds.save_to_disk()`, but I'm not exactly sure if this is the best place to raise the `ValueError` (it seems that a more correct way to do it would be to write separate checks for `ds.push_to_hub()` and `ds.save_to_disk()`). I'd be happy to submit a PR if you think something along these lines would be acceptable.
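For illustration, a small hedged sketch of the failure mode and the proposed fail-fast check, using a toy in-memory dataset (the output path is a placeholder):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})  # only 3 rows
num_shards = 10

# Today, the call below only fails once the shard index goes out of bounds
# (kept commented out so the sketch runs without writing anything to disk):
# ds.save_to_disk("tmp_dataset", num_shards=num_shards)

# The proposed guard would instead fail fast with an informative message:
if num_shards > len(ds):
    raise ValueError(
        f"num_shards ({num_shards}) must be smaller than or equal to "
        f"the number of rows in the dataset ({len(ds)})."
    )
```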
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7443/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7443/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6314
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6314/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6314/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6314/events
|
https://github.com/huggingface/datasets/pull/6314
| 1,951,684,763
|
PR_kwDODunzps5dPo25
| 6,314
|
Support creating new branch in push_to_hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmif",
"id": 1000442,
"login": "jmif",
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"repos_url": "https://api.github.com/users/jmif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmif",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-10-19T09:12:39Z
| 2023-10-19T09:20:06Z
| 2023-10-19T09:19:48Z
|
NONE
| null | null | null |
This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created.
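A hedged sketch of the intended usage after this change (repo id and branch name are placeholders; the exact keyword follows the existing `push_to_hub` signature):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})

# Hypothetical call once this PR is in: pushing to a branch that does not
# exist yet should create it on the Hub instead of failing.
ds.push_to_hub("username/demo-dataset", branch="experiments")
```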
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmif",
"id": 1000442,
"login": "jmif",
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"repos_url": "https://api.github.com/users/jmif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmif",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6314/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6314/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6314.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6314",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6314.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6314"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7295
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7295/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7295/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7295/events
|
https://github.com/huggingface/datasets/issues/7295
| 2,672,003,384
|
I_kwDODunzps6fQ4k4
| 7,295
|
[BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"events_url": "https://api.github.com/users/casper-hansen/events{/privacy}",
"followers_url": "https://api.github.com/users/casper-hansen/followers",
"following_url": "https://api.github.com/users/casper-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/casper-hansen",
"id": 27340033,
"login": "casper-hansen",
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"organizations_url": "https://api.github.com/users/casper-hansen/orgs",
"received_events_url": "https://api.github.com/users/casper-hansen/received_events",
"repos_url": "https://api.github.com/users/casper-hansen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/casper-hansen",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-11-19T12:23:36Z
| 2024-11-19T13:01:53Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions.
Analysis of what's happening:
1. `datasets` passes the `client_kwargs` through `fsspec`
2. `fsspec` passes the `client_kwargs` through `s3fs`
3. `s3fs` passes the `client_kwargs` to `aiobotocore` which uses `aiohttp`
```
s3creator = self.session.create_client(
"s3", config=conf, **init_kwargs, **client_kwargs
)
```
4. The `session` then tries to create the `aiohttp`-backed client, but the `**kwargs` are not kept as an opaque bundle; they are passed through as individual keyword arguments (`requote_redirect_url` and `trust_env`) that the client constructor does not accept (see the sketch below).
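For illustration only, a hedged sketch of the kwargs path described above, assuming `s3fs` is installed (this is not the actual `datasets` source):
```python
import fsspec

# `datasets` ends up forwarding client kwargs like these down to s3fs;
# s3fs passes them to aiobotocore's create_client(), which rejects the
# unknown 'requote_redirect_url' keyword once a request is actually made.
fs = fsspec.filesystem("s3", client_kwargs={"requote_redirect_url": False})
```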
Error:
```
Traceback (most recent call last):
File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module>
batch = next(iter(ds))
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
for key, example in ex_iterable:
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__
for key, pa_table in self.generate_tables_fn(**self.kwargs):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables
for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths
elif xisdir(urlpath, download_config=download_config):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir
return fs.isdir(inner_path)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir
return self.info(path)["type"] == "directory"
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info
self._get_dirs()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs
f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info
out = await self._call_s3(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3
await self.set_session()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session
s3creator = self.session.create_client(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client
return ClientCreatorContext(self._create_client(*args, **kwargs))
TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url'
```
### Steps to reproduce the bug
1. Install the necessary libraries; `datasets` must be at least version 2.19.0:
```
pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0'
```
2. Run this code:
```
from datasets import load_dataset
ds = load_dataset(
"json",
data_files="s3://your_path/*.jsonl.gz",
streaming=True,
split="train",
)
batch = next(iter(ds))
print(batch)
```
3. You get the `unexpected keyword argument 'requote_redirect_url'` error.
### Expected behavior
`datasets` should be able to load a batch from the dataset stored on S3 without triggering the `requote_redirect_url` error.
Fix: I could work around this by directly removing `requote_redirect_url` and `trust_env`; the dataset then loads properly.
<img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2">
### Environment info
- `datasets` version: 3.1.0
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7295/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7295/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4662
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4662/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4662/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4662/events
|
https://github.com/huggingface/datasets/pull/4662
| 1,298,845,369
|
PR_kwDODunzps47GTEc
| 4,662
|
Fix: conll2003 - fix empty example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-08T10:49:13Z
| 2022-07-08T14:14:53Z
| 2022-07-08T14:02:42Z
|
MEMBER
| null | null | null |
As reported in https://huggingface.co/datasets/conll2003/discussions/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4662/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4662/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4662",
"merged_at": "2022-07-08T14:02:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4662"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6560
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6560/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6560/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6560/events
|
https://github.com/huggingface/datasets/issues/6560
| 2,065,637,625
|
I_kwDODunzps57HyD5
| 6,560
|
Support Video
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"duplicate of #5225"
] | 2024-01-04T13:10:58Z
| 2024-08-23T09:51:27Z
| 2024-08-23T09:51:27Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
HF datasets are awesome at supporting text and images. It would be great to see the same kind of support for videos :)
### Motivation
Video generation :)
### Your contribution
Will probably be limited to raising this feature request ;)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6560/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6560/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7375
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7375/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7375/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7375/events
|
https://github.com/huggingface/datasets/issues/7375
| 2,800,609,218
|
I_kwDODunzps6m7efC
| 7,375
|
vllm batch inference error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51228154?v=4",
"events_url": "https://api.github.com/users/YuShengzuishuai/events{/privacy}",
"followers_url": "https://api.github.com/users/YuShengzuishuai/followers",
"following_url": "https://api.github.com/users/YuShengzuishuai/following{/other_user}",
"gists_url": "https://api.github.com/users/YuShengzuishuai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YuShengzuishuai",
"id": 51228154,
"login": "YuShengzuishuai",
"node_id": "MDQ6VXNlcjUxMjI4MTU0",
"organizations_url": "https://api.github.com/users/YuShengzuishuai/orgs",
"received_events_url": "https://api.github.com/users/YuShengzuishuai/received_events",
"repos_url": "https://api.github.com/users/YuShengzuishuai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YuShengzuishuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuShengzuishuai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YuShengzuishuai",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Make sure you have installed a recent version of `soundfile`"
] | 2025-01-21T03:22:23Z
| 2025-01-30T14:02:40Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug

### Steps to reproduce the bug

### Expected behavior

### Environment info

| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7375/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7375/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5062
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5062/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5062/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5062/events
|
https://github.com/huggingface/datasets/pull/5062
| 1,395,739,417
|
PR_kwDODunzps5AG6SA
| 5,062
|
Fix CI hfh token warning
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"good catch !"
] | 2022-10-04T06:36:54Z
| 2022-10-04T08:58:15Z
| 2022-10-04T08:42:31Z
|
MEMBER
| null | null | null |
In our CI, we get warnings from `hfh` about using deprecated `token`: https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_overwrite_files
C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\huggingface_hub\utils\_deprecation.py:97: FutureWarning: Deprecated argument(s) used in 'dataset_info': token. Will not be supported from version '0.12'.
warnings.warn(message, FutureWarning)
```
This PR updates the tests in `TestPushToHub` so that these warnings are no longer raised (see the sketch below for the kind of change involved).
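For context, a hedged sketch of the kind of keyword swap involved; the replacement keyword (`use_auth_token`) is an assumption about the hfh API at that version, and the real fix in the tests may differ:
```python
from huggingface_hub import HfApi

api = HfApi()

# Before (triggers the FutureWarning quoted above on hfh < 0.12):
# api.dataset_info("username/dataset", token="hf_xxx")

# After (non-deprecated keyword at that hfh version; keyword name is an assumption):
# api.dataset_info("username/dataset", use_auth_token="hf_xxx")
```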
Continuation of:
- #5031
CC: @lhoestq
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5062/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5062/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5062.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5062",
"merged_at": "2022-10-04T08:42:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5062.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5062"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7489
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7489/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7489/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7489/events
|
https://github.com/huggingface/datasets/pull/7489
| 2,958,204,763
|
PR_kwDODunzps6QqSRD
| 7,489
|
fix: loading of datasets from disk (#7373)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40773225?v=4",
"events_url": "https://api.github.com/users/sam-hey/events{/privacy}",
"followers_url": "https://api.github.com/users/sam-hey/followers",
"following_url": "https://api.github.com/users/sam-hey/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-hey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sam-hey",
"id": 40773225,
"login": "sam-hey",
"node_id": "MDQ6VXNlcjQwNzczMjI1",
"organizations_url": "https://api.github.com/users/sam-hey/orgs",
"received_events_url": "https://api.github.com/users/sam-hey/received_events",
"repos_url": "https://api.github.com/users/sam-hey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sam-hey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-hey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sam-hey",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"@nepfaff Could you confirm if this fixes the issue for you? I checked Memray, and everything looked good on my end.\r\n\r\nInstall: `pip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets`\r\n",
"Will aim to get to this soon. I don't have a rapid testing pipeline setup but need to wait for some AWS nodes to become free",
"I now set up a small experiment:\r\n\r\n```python\r\n# Log initial RAM usage\r\n process = psutil.Process(os.getpid())\r\n initial_ram = process.memory_info().rss / (1024 * 1024) # Convert to MB\r\n logging.info(f\"Initial RAM usage: {initial_ram:.2f} MB\")\r\n\r\n chunk_datasets = [\r\n Dataset.load_from_disk(dataset_path, keep_in_memory=False) for _ in range(N)\r\n ]\r\n combined_dataset = concatenate_datasets(chunk_datasets)\r\n\r\n # Log final RAM usage\r\n final_ram = process.memory_info().rss / (1024 * 1024) # Convert to MB\r\n ram_diff = final_ram - initial_ram\r\n logging.info(f\"Final RAM usage: {final_ram:.2f} MB\")\r\n logging.info(f\"RAM usage increase: {ram_diff:.2f} MB\")\r\n```\r\n\r\nThe RAM usage is linearly correlated with `N` on datasets master!\r\n\r\nFor my test dataset:\r\n- N=5 => RAM usage increase: 26302.91 MB\r\n- N=10 => RAM usage increase: 52315.18 MB\r\n- N=20 => RAM usage increase: 104510.65 MB\r\n- N=40 => RAM usage increase: 209166.30 MB\r\n\r\nUnfortunately, your patch doesn't seem to change this:\r\n```bash\r\npip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets\r\npip list | grep datasets\r\ndatasets 3.5.1.dev0\r\n```\r\nGives exactly the same RAM statistics.\r\n\r\n**Edit:** The results are a bit flawed as the memory increase all seems to come from `Dataset.load_from_disk(dataset_path, keep_in_memory=False)` here (which I don't think should happen either?) and not from `concatenate_datasets`. This seems different from my large-scale setup that runs out of memory during `concatenate_datasets` but I don't seem to be able to replicate this here...",
"Thanks a lot, @nepfaff, for taking a look at this! It seems that `concatenate_datasets()` is fixed with this PR. I can also confirm that loading a large number of files requires significant memory. However, as I understand it, this is expected/a bug since the memory consumption stems from `pa.memory_map()`, which returns a memory-mapped file.\r\n\r\nThis behavior might be related to this bug: https://github.com/apache/arrow/issues/34423 \r\n\r\n<img width=\"1728\" alt=\"Screenshot 2025-04-03 at 16 01 11\" src=\"https://github.com/user-attachments/assets/475691d8-3aba-4d7e-b8ef-5e7552c70b14\" />\r\n",
"Great ! have you tested that it also fixes the memory issue in your case @iamollas ?\r\n\r\nHappy to know that it works for you @sam-hey ! Looking forward to merging this",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7489). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-03-29T16:22:58Z
| 2025-04-24T16:36:36Z
| null |
NONE
| null | null | null |
Fixes dataset loading from disk by ensuring that memory maps and streams are properly closed.
For more details, see https://github.com/huggingface/datasets/issues/7373.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7489/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7489/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7489.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7489",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7489.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7489"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5935
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5935/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5935/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5935/events
|
https://github.com/huggingface/datasets/pull/5935
| 1,748,090,220
|
PR_kwDODunzps5Sh9Mg
| 5,935
|
Better row group size in push_to_hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007489 / 0.011353 (-0.003864) | 0.004914 / 0.011008 (-0.006095) | 0.111626 / 0.038508 (0.073117) | 0.037920 / 0.023109 (0.014811) | 0.350571 / 0.275898 (0.074673) | 0.389667 / 0.323480 (0.066187) | 0.006309 / 0.007986 (-0.001676) | 0.005488 / 0.004328 (0.001160) | 0.083962 / 0.004250 (0.079712) | 0.050728 / 0.037052 (0.013675) | 0.360997 / 0.258489 (0.102508) | 0.392736 / 0.293841 (0.098895) | 0.031975 / 0.128546 (-0.096571) | 0.009941 / 0.075646 (-0.065705) | 0.379840 / 0.419271 (-0.039432) | 0.056522 / 0.043533 (0.012989) | 0.359379 / 0.255139 (0.104240) | 0.384487 / 0.283200 (0.101287) | 0.117523 / 0.141683 (-0.024160) | 1.683639 / 1.452155 (0.231485) | 1.791645 / 1.492716 (0.298929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236862 / 0.018006 (0.218856) | 0.481208 / 0.000490 (0.480719) | 0.007455 / 0.000200 (0.007255) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030854 / 0.037411 (-0.006557) | 0.126892 / 0.014526 (0.112367) | 0.139207 / 0.176557 (-0.037350) | 0.206447 / 0.737135 (-0.530689) | 0.143095 / 0.296338 (-0.153244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474677 / 0.215209 (0.259468) | 4.699534 / 2.077655 (2.621879) | 2.152102 / 1.504120 (0.647983) | 1.934815 / 1.541195 (0.393620) | 1.986448 / 1.468490 
(0.517958) | 0.607184 / 4.584777 (-3.977593) | 4.480385 / 3.745712 (0.734673) | 2.074729 / 5.269862 (-3.195132) | 1.182383 / 4.565676 (-3.383294) | 0.075624 / 0.424275 (-0.348651) | 0.014046 / 0.007607 (0.006439) | 0.598859 / 0.226044 (0.372814) | 5.959551 / 2.268929 (3.690622) | 2.700851 / 55.444624 (-52.743773) | 2.303775 / 6.876477 (-4.572702) | 2.456441 / 2.142072 (0.314369) | 0.747185 / 4.805227 (-4.058042) | 0.165787 / 6.500664 (-6.334878) | 0.075817 / 0.075469 (0.000348) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.411859 / 1.841788 (-0.429928) | 17.375495 / 8.074308 (9.301187) | 15.187098 / 10.191392 (4.995706) | 0.169953 / 0.680424 (-0.510471) | 0.020204 / 0.534201 (-0.513997) | 0.461424 / 0.579283 (-0.117859) | 0.494443 / 0.434364 (0.060080) | 0.544583 / 0.540337 (0.004246) | 0.648231 / 1.386936 (-0.738705) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007785 / 0.011353 (-0.003568) | 0.005314 / 0.011008 (-0.005694) | 0.087273 / 0.038508 (0.048765) | 0.037810 / 0.023109 (0.014701) | 0.425473 / 0.275898 (0.149575) | 0.459976 / 0.323480 (0.136497) | 0.007270 / 0.007986 (-0.000716) | 0.004631 / 0.004328 (0.000303) | 0.087063 / 0.004250 (0.082812) | 0.052630 / 0.037052 (0.015578) | 0.432384 / 0.258489 (0.173895) | 0.500291 / 0.293841 (0.206450) | 0.033144 / 0.128546 (-0.095402) | 0.010101 / 0.075646 (-0.065545) | 0.096068 / 0.419271 (-0.323204) | 0.062750 / 0.043533 (0.019217) | 0.419308 / 0.255139 (0.164169) | 0.437099 / 0.283200 (0.153900) | 0.122289 / 0.141683 (-0.019394) | 1.737829 / 1.452155 (0.285674) | 1.851481 / 1.492716 (0.358765) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014277 / 0.018006 (-0.003729) | 0.489835 / 0.000490 (0.489345) | 0.008423 / 0.000200 (0.008223) | 0.000188 / 0.000054 (0.000134) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032966 / 0.037411 (-0.004445) | 0.130069 / 0.014526 (0.115544) | 0.144372 / 0.176557 (-0.032185) | 0.200400 / 0.737135 (-0.536735) | 0.149384 / 0.296338 (-0.146954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.511542 / 0.215209 (0.296333) | 5.093879 / 2.077655 (3.016225) | 2.572088 / 1.504120 (1.067968) | 2.339118 / 1.541195 (0.797923) | 2.441637 / 1.468490 (0.973147) | 0.614818 / 4.584777 (-3.969959) | 4.724441 / 3.745712 (0.978729) | 5.431978 / 5.269862 (0.162116) | 2.257794 / 4.565676 (-2.307883) | 0.078109 / 0.424275 (-0.346166) | 0.013821 / 0.007607 (0.006214) | 0.639232 / 0.226044 (0.413188) | 6.424623 / 2.268929 (4.155694) | 3.163018 / 55.444624 (-52.281606) | 2.756786 / 6.876477 (-4.119690) | 2.808655 / 2.142072 (0.666583) | 0.745843 / 4.805227 (-4.059385) | 0.165562 / 6.500664 (-6.335102) | 0.076610 / 0.075469 (0.001141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.738630 / 1.841788 (-0.103158) | 18.073573 / 8.074308 (9.999265) | 16.482820 / 10.191392 (6.291428) | 0.213233 / 0.680424 (-0.467191) | 0.022839 / 0.534201 (-0.511362) | 0.487043 / 0.579283 (-0.092240) | 0.512518 / 0.434364 (0.078154) | 0.549365 / 0.540337 (0.009028) | 0.656612 / 1.386936 (-0.730324) |\n\n</details>\n</details>\n\n\n",
"Good idea!\r\n\r\nI was wondering: if we want to optimize the balance between the size of downloading a row group, and the number of rows in the group, would it make sense to compute the row group size by checking the average size of the rows?\r\n\r\neg. 32x32 images could have a larger row group size than full HD images, no? Relying on the size would even remove the need to check the column types.\r\n\r\n(in this proposal, we could use the computed row group size, eg 837, or use the nearest row group size in a list of values: 10, 100, 1000, 10000)",
"Probably, but I would go for a simpler solution first :p",
"Sure! I wanted to understand if the idea made sense or not, but it's not for this PR.",
"I think it will be more useful for people who use the viewer and won't impact sequential io that much.",
"DuckDB [paragraph](https://duckdb.org/docs/data/parquet/tips.html#selecting-a-row_group_size) that explains how to choose the `row_group_size`. Our default shard size is 500 MB in `push_to_hub`, so, ideally, we should aim for 64 MB row groups (and make this part configurable for power users 🙂).\r\n\r\nSo, before merging this PR, let's add a TODO or open an issue as a reminder that this can be improved.",
"I moved the config values, improved the features check and mentioned the improvements we could do in the docstring :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006211 / 0.011353 (-0.005141) | 0.004244 / 0.011008 (-0.006764) | 0.097941 / 0.038508 (0.059433) | 0.028564 / 0.023109 (0.005455) | 0.299651 / 0.275898 (0.023753) | 0.340694 / 0.323480 (0.017214) | 0.005161 / 0.007986 (-0.002824) | 0.004764 / 0.004328 (0.000435) | 0.075505 / 0.004250 (0.071255) | 0.039656 / 0.037052 (0.002603) | 0.309242 / 0.258489 (0.050753) | 0.350783 / 0.293841 (0.056942) | 0.025145 / 0.128546 (-0.103401) | 0.008498 / 0.075646 (-0.067148) | 0.317657 / 0.419271 (-0.101615) | 0.043926 / 0.043533 (0.000394) | 0.305915 / 0.255139 (0.050776) | 0.331630 / 0.283200 (0.048430) | 0.088564 / 0.141683 (-0.053119) | 1.533175 / 1.452155 (0.081021) | 1.581017 / 1.492716 (0.088301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206032 / 0.018006 (0.188025) | 0.433446 / 0.000490 (0.432956) | 0.003955 / 0.000200 (0.003755) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023468 / 0.037411 (-0.013943) | 0.103292 / 0.014526 (0.088766) | 0.107234 / 0.176557 (-0.069322) | 0.168525 / 0.737135 (-0.568610) | 0.113218 / 0.296338 (-0.183120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431085 / 0.215209 (0.215875) | 4.302082 / 2.077655 (2.224427) | 2.068290 / 1.504120 (0.564171) | 1.850718 / 1.541195 (0.309523) | 1.964261 / 1.468490 
(0.495771) | 0.547562 / 4.584777 (-4.037215) | 3.410739 / 3.745712 (-0.334974) | 1.779640 / 5.269862 (-3.490221) | 1.005466 / 4.565676 (-3.560210) | 0.066250 / 0.424275 (-0.358025) | 0.011877 / 0.007607 (0.004270) | 0.525185 / 0.226044 (0.299141) | 5.234786 / 2.268929 (2.965857) | 2.398045 / 55.444624 (-53.046580) | 2.073020 / 6.876477 (-4.803457) | 2.210753 / 2.142072 (0.068680) | 0.654897 / 4.805227 (-4.150331) | 0.134639 / 6.500664 (-6.366025) | 0.067050 / 0.075469 (-0.008419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180210 / 1.841788 (-0.661577) | 13.613091 / 8.074308 (5.538783) | 13.441837 / 10.191392 (3.250445) | 0.146048 / 0.680424 (-0.534376) | 0.016505 / 0.534201 (-0.517696) | 0.363210 / 0.579283 (-0.216073) | 0.405484 / 0.434364 (-0.028880) | 0.428712 / 0.540337 (-0.111625) | 0.522300 / 1.386936 (-0.864636) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006147 / 0.011353 (-0.005206) | 0.004161 / 0.011008 (-0.006847) | 0.075861 / 0.038508 (0.037353) | 0.027948 / 0.023109 (0.004839) | 0.362466 / 0.275898 (0.086568) | 0.398227 / 0.323480 (0.074747) | 0.005014 / 0.007986 (-0.002972) | 0.004772 / 0.004328 (0.000444) | 0.075674 / 0.004250 (0.071423) | 0.039158 / 0.037052 (0.002106) | 0.363567 / 0.258489 (0.105078) | 0.410378 / 0.293841 (0.116537) | 0.025510 / 0.128546 (-0.103036) | 0.008528 / 0.075646 (-0.067118) | 0.081803 / 0.419271 (-0.337468) | 0.040954 / 0.043533 (-0.002579) | 0.358492 / 0.255139 (0.103353) | 0.381345 / 0.283200 (0.098145) | 0.092347 / 0.141683 (-0.049336) | 1.567695 / 1.452155 (0.115540) | 1.668412 / 1.492716 (0.175696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203367 / 0.018006 (0.185360) | 0.424642 / 0.000490 (0.424152) | 0.002451 / 0.000200 (0.002251) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026129 / 0.037411 (-0.011282) | 0.102564 / 0.014526 (0.088039) | 0.110583 / 0.176557 (-0.065973) | 0.164332 / 0.737135 (-0.572804) | 0.115706 / 0.296338 (-0.180632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468925 / 0.215209 (0.253716) | 4.657266 / 2.077655 (2.579612) | 2.423280 / 1.504120 (0.919160) | 2.236284 / 1.541195 (0.695089) | 2.323019 / 1.468490 (0.854529) | 0.548120 / 4.584777 (-4.036657) | 3.455602 / 3.745712 (-0.290110) | 1.730421 / 5.269862 (-3.539441) | 1.006089 / 4.565676 (-3.559588) | 0.067478 / 0.424275 (-0.356797) | 0.011465 / 0.007607 (0.003857) | 0.574235 / 0.226044 (0.348190) | 5.744404 / 2.268929 (3.475475) | 2.882225 / 55.444624 (-52.562400) | 2.618246 / 6.876477 (-4.258231) | 2.642920 / 2.142072 (0.500847) | 0.661441 / 4.805227 (-4.143787) | 0.137358 / 6.500664 (-6.363306) | 0.070372 / 0.075469 (-0.005097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333815 / 1.841788 (-0.507973) | 14.689667 / 8.074308 (6.615359) | 14.362294 / 10.191392 (4.170902) | 0.152011 / 0.680424 (-0.528413) | 0.016869 / 0.534201 (-0.517332) | 0.370433 / 0.579283 (-0.208851) | 0.399642 / 0.434364 (-0.034722) | 0.433759 / 0.540337 (-0.106578) | 0.525443 / 1.386936 (-0.861493) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006564 / 0.011353 (-0.004789) | 0.004350 / 0.011008 (-0.006658) | 0.096277 / 0.038508 (0.057769) | 0.032956 / 0.023109 (0.009847) | 0.303675 / 0.275898 (0.027777) | 0.336384 / 0.323480 (0.012904) | 0.005789 / 0.007986 (-0.002197) | 0.003957 / 0.004328 (-0.000371) | 0.073990 / 0.004250 (0.069740) | 0.050974 / 0.037052 (0.013922) | 0.321754 / 0.258489 (0.063265) | 0.349489 / 0.293841 (0.055648) | 0.031138 / 0.128546 (-0.097409) | 0.009000 / 0.075646 (-0.066646) | 0.325445 / 0.419271 (-0.093826) | 0.070173 / 0.043533 (0.026640) | 0.304706 / 0.255139 (0.049567) | 0.321803 / 0.283200 (0.038603) | 0.109405 / 0.141683 (-0.032278) | 1.489812 / 1.452155 (0.037657) | 1.577729 / 1.492716 (0.085013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287187 / 0.018006 (0.269181) | 0.527625 / 0.000490 (0.527135) | 0.006533 / 0.000200 (0.006333) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026659 / 0.037411 (-0.010752) | 0.106236 / 0.014526 (0.091710) | 0.118615 / 0.176557 (-0.057941) | 0.173156 / 0.737135 (-0.563979) | 0.122883 / 0.296338 (-0.173456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407189 / 0.215209 (0.191980) | 4.055732 / 2.077655 (1.978078) | 1.865594 / 1.504120 (0.361474) | 1.664325 / 1.541195 (0.123130) | 1.668961 / 1.468490 
(0.200471) | 0.521207 / 4.584777 (-4.063570) | 3.740424 / 3.745712 (-0.005288) | 3.431973 / 5.269862 (-1.837889) | 1.636669 / 4.565676 (-2.929008) | 0.065271 / 0.424275 (-0.359005) | 0.012151 / 0.007607 (0.004544) | 0.514233 / 0.226044 (0.288189) | 5.110150 / 2.268929 (2.841222) | 2.264340 / 55.444624 (-53.180284) | 1.940428 / 6.876477 (-4.936049) | 2.042286 / 2.142072 (-0.099787) | 0.639200 / 4.805227 (-4.166028) | 0.139537 / 6.500664 (-6.361127) | 0.063195 / 0.075469 (-0.012274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.179501 / 1.841788 (-0.662286) | 14.600133 / 8.074308 (6.525825) | 14.902137 / 10.191392 (4.710745) | 0.144509 / 0.680424 (-0.535915) | 0.017449 / 0.534201 (-0.516752) | 0.393135 / 0.579283 (-0.186148) | 0.413103 / 0.434364 (-0.021261) | 0.459897 / 0.540337 (-0.080440) | 0.552602 / 1.386936 (-0.834334) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006891 / 0.011353 (-0.004462) | 0.004633 / 0.011008 (-0.006375) | 0.073093 / 0.038508 (0.034585) | 0.032509 / 0.023109 (0.009399) | 0.348332 / 0.275898 (0.072434) | 0.381920 / 0.323480 (0.058440) | 0.005978 / 0.007986 (-0.002007) | 0.005360 / 0.004328 (0.001032) | 0.074307 / 0.004250 (0.070056) | 0.049668 / 0.037052 (0.012615) | 0.354713 / 0.258489 (0.096224) | 0.398521 / 0.293841 (0.104681) | 0.032013 / 0.128546 (-0.096534) | 0.008890 / 0.075646 (-0.066756) | 0.080013 / 0.419271 (-0.339259) | 0.051820 / 0.043533 (0.008288) | 0.349730 / 0.255139 (0.094591) | 0.369267 / 0.283200 (0.086067) | 0.103874 / 0.141683 (-0.037809) | 1.484148 / 1.452155 (0.031993) | 1.573927 / 1.492716 (0.081211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009699 / 0.018006 (-0.008307) | 0.511176 / 0.000490 (0.510686) | 0.002938 / 0.000200 (0.002738) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027847 / 0.037411 (-0.009564) | 0.111565 / 0.014526 (0.097039) | 0.120625 / 0.176557 (-0.055932) | 0.172130 / 0.737135 (-0.565006) | 0.125949 / 0.296338 (-0.170389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430634 / 0.215209 (0.215424) | 4.315377 / 2.077655 (2.237722) | 2.070764 / 1.504120 (0.566644) | 1.881962 / 1.541195 (0.340767) | 1.904053 / 1.468490 (0.435563) | 0.524973 / 4.584777 (-4.059804) | 3.718359 / 3.745712 (-0.027353) | 3.415344 / 5.269862 (-1.854518) | 1.224568 / 4.565676 (-3.341108) | 0.065593 / 0.424275 (-0.358682) | 0.011643 / 0.007607 (0.004036) | 0.537050 / 0.226044 (0.311006) | 5.352155 / 2.268929 (3.083226) | 2.557361 / 55.444624 (-52.887263) | 2.217770 / 6.876477 (-4.658707) | 2.194975 / 2.142072 (0.052902) | 0.635142 / 4.805227 (-4.170085) | 0.140642 / 6.500664 (-6.360022) | 0.064690 / 0.075469 (-0.010779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266125 / 1.841788 (-0.575663) | 14.836413 / 8.074308 (6.762105) | 14.446870 / 10.191392 (4.255478) | 0.191545 / 0.680424 (-0.488878) | 0.017433 / 0.534201 (-0.516768) | 0.392296 / 0.579283 (-0.186987) | 0.420698 / 0.434364 (-0.013666) | 0.463225 / 0.540337 (-0.077112) | 0.556127 / 1.386936 (-0.830809) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-08T15:01:15Z
| 2023-06-09T17:47:37Z
| 2023-06-09T17:40:09Z
|
MEMBER
| null | null | null |
This is a very simple change that improves `to_parquet` to use a more reasonable row group size for image and audio datasets.
This is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on the HF Hub.
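
For illustration only (this is not the code changed by the PR), here is a minimal pyarrow sketch of what an explicit row group size looks like when writing Parquet; the table contents and the value `100` are assumptions:

```python
# Illustrative sketch, not the PR's implementation: write a Parquet file with an
# assumed row group size of 100 rows so each group stays small for media columns.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"image_bytes": [b"\x00" * 1024] * 1000})  # stand-in for image data
pq.write_table(table, "data.parquet", row_group_size=100)
```

Smaller row groups keep the amount of data that has to be decoded per access low, which is what makes random access (for example from the dataset viewer) cheaper for image and audio columns.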
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5935/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5935/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5935",
"merged_at": "2023-06-09T17:40:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5935"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6203
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6203/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6203/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6203/events
|
https://github.com/huggingface/datasets/issues/6203
| 1,877,491,602
|
I_kwDODunzps5v6D-S
| 6,203
|
Support loading from a DVC remote repository
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"(cross-posting from the linked DVC issue)\r\n\r\nI think this should already work out of the box with the current `datasets` and `dvc.api` releases by passing the correct `storage_options` into the datasets calls. `storage_options` is essentially just the kwargs dict that gets passed to the fsspec fs constructor.\r\n\r\nThe main thing to note here is that the fsspec DVCFileSystem URL should be `dvc://folder/file.json` (i.e. this should be the DVCFileSystem path that is relative to the DVC repo root). You cannot use a URL like `https://gitlab.com/user/repo/folder/file.json`.\r\n\r\nI think something like this should work for you (in a venv where both DVC and datasets are installed):\r\n```python\r\nimport datasets\r\n\r\n# load a dataset from Git/DVC repository where Git repo is located at https://gitlab.com/user/repo.git\r\n# and path to dataset (relative to git/dvc repo root) is 'folder/file.json'\r\ndatasets.load_from_disk(\r\n \"dvc://folder/file.json\",\r\n storage_options={\"url\": \"https://gitlab.com/user/repo.git\"},\r\n)\r\n```\r\n\r\nbasically the `dvc://` is what tells fsspec to create a `DVCFileSystem` and it will construct it like\r\n```python\r\nfs = DVCFileSystem(**storage_options)\r\n```\r\n\r\nThen the subsequent calls use the rest of the `dvc://...` URL like \r\n```python\r\nfs.exists(\"folder/file.json\")\r\n```",
"Hi @pmrowla Thank you for your help, that's very helpful, I was indeed using `fsspec` incorrectly here. There is still an issue with `datasets`:\r\n\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset(\"json\", data_files=\"dvc://folder/file.jsonl\", storage_options={\"url\": \"https://gitlab.com/repo/folder/\"})\r\n```\r\n\r\nresults in the following exception:\r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/fs.py\", line 217, in info\r\n ret = self.trie.info(key)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/git/objects.py\", line 141, in info\r\n obj = self.trie[key]\r\n ~~~~~~~~~^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/pygtrie.py\", line 937, in __getitem__\r\n node, _ = self._get_node(key_or_slice)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/pygtrie.py\", line 630, in _get_node\r\n raise KeyError(key)\r\nKeyError: ('dvc:', 'datasets', 'spider', 'train.jsonl')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 2129, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 1815, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 1430, in dataset_module_factory\r\n ).get_module()\r\n ^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 958, in get_module\r\n data_files = DataFilesDict.from_patterns(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 674, in from_patterns\r\n DataFilesList.from_patterns(\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 589, in from_patterns\r\n origin_metadata = _get_origin_metadata(data_files, download_config=download_config)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 504, in _get_origin_metadata\r\n return thread_map(\r\n ^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield _result_or_cancel(fs.pop())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 317, in _result_or_cancel\r\n return fut.result(timeout)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 456, in result\r\n return self.__get_result()\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 401, in __get_result\r\n raise self._exception\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 491, in _get_single_origin_metadata\r\n info = fs.info(data_file)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc/fs/dvc.py\", line 357, in info\r\n return self._info(key, path, ignore_subrepos=ignore_subrepos)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc/fs/dvc.py\", line 377, in _info\r\n fs_info = fs.info(fs_path)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc_objects/fs/base.py\", line 501, in info\r\n return self.fs.info(path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/fs.py\", line 221, in info\r\n raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), path)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/dvc:/folder/file.jsonl'\r\n```\r\n\r\nSomehow the URL gets turned into `/dvc:/folder/file.jsonl` inside `datasets`. Otherwise I can confirm that using `fsspec` properly with DVC works as expected.\r\n",
"For the record, there was a `dvc.api.DVCFileSystem` bug which is fixed in DVC `main` and will be available in the next DVC release.\r\n\r\nTo use DVC with `datasets` you just need to pass the Git/DVC repo `url` in `storage_options` as discussed above.\r\n\r\n(note that this requires having both `datasets` and `dvc` installed in your python environment)\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> load_dataset(\r\n... \"json\",\r\n... data_files=\"dvc://eval/metrics.json\",\r\n... storage_options={\"url\": \"https://github.com/iterative/example-get-started.git\"},\r\n... )\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['avg_prec', 'roc_auc'],\r\n num_rows: 1\r\n })\r\n})\r\n```\r\n\r\nAny additional `DVCFileSystem` args can be passed in the same way, so to get a specific branch/tag/commit from the DVC repo you just need to specify the `rev` in `storage_options` like\r\n```\r\nstorage_options={\"url\": \"https://github.com/iterative/example-get-started.git\", \"rev\": \"main\"}\r\n```\r\n\r\nI think this issue can probably be closed now.",
"Thank you for your help, closing."
] | 2023-09-01T14:04:52Z
| 2023-09-15T15:11:27Z
| 2023-09-15T15:11:27Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Adding support for loading a file from a DVC repository that is tracked remotely on an SCM.
### Motivation
DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`.
I have a GitLab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files with `datasets` directly from a URL. My goal is to write generic code that abstracts the storage layer, such that my users only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded.
### Your contribution
I managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from a `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC.
```python
from fsspec.core import url_to_fs
fs, _ = url_to_fs("dvc::https://gitlab.com/repository/group/my-repo")
```
From here I'm not sure how to continue: it seems that `datasets` expects the URL to be fully qualified, like `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json`, but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`?
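
For reference, the pattern that later resolved this thread (see the comments above) keeps the `dvc://` path relative to the repo root and passes the Git URL through `storage_options`; this sketch reuses the public example repo mentioned in the comments and assumes both `datasets` and `dvc` are installed:

```python
from datasets import load_dataset

# The dvc:// path is relative to the DVC repo root; the Git repo URL goes in storage_options.
ds = load_dataset(
    "json",
    data_files="dvc://eval/metrics.json",
    storage_options={"url": "https://github.com/iterative/example-get-started.git"},
)
```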
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6203/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6203/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5889
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5889/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5889/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5889/events
|
https://github.com/huggingface/datasets/issues/5889
| 1,722,373,618
|
I_kwDODunzps5mqVXy
| 5,889
|
Token Alignment for input and output data over train and test batch/dataset.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/akesh1235",
"id": 125154243,
"login": "akesh1235",
"node_id": "U_kgDOB3Wzww",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"type": "User",
"url": "https://api.github.com/users/akesh1235",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-05-23T15:58:55Z
| 2023-05-23T15:58:55Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
`data`
```
DatasetDict({
    train: Dataset({
        features: ['input', 'output'],
        num_rows: 4500
    })
    test: Dataset({
        features: ['input', 'output'],
        num_rows: 500
    })
})
```
**# input (incorrect sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'
**# output (correct sentence)**
`data['train'][0]['output']`
**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'
**I want to align the output tokens with the input.**
```python
# tokenize both inputs and targets
def tokenize_fn(batch):
    # tokenize the input sequence first
    # this populates input_ids, attention_mask, etc.
    tokenized_inputs = tokenizer(
        batch['input']
    )
    labels_batch = tokenizer.tokenize(batch['output'])  # original targets
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        word_ids = tokenized_inputs[i].word_ids()
        # align_targets is another user-defined function which is called here
        aligned_labels_batch.append(align_targets(labels, word_ids))
    # recall: the 'target' must be stored in a key called 'labels'
    tokenized_inputs['labels'] = aligned_labels_batch
    return tokenized_inputs
```
```python
data.map(
    tokenize_fn,
    batched=True,
    remove_columns=data['train'].column_names,
)
```
When this user-defined function is mapped over every record of the train and test splits in batched mode, I get the following errors:
**1.** `DatasetTransformationNotAllowedError: Using .map in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run .drop_index() to remove your index and then re-add it.`
**2.** `TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]`
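
A minimal sketch of a batched `tokenize_fn` that avoids the `TextEncodeInput` error by tokenizing the targets one string at a time and reading word ids through `BatchEncoding.word_ids(batch_index=...)`; the checkpoint name and the `align_targets` stub are assumptions added only to keep the example self-contained, and any attached search index would still need to be removed with `.drop_index()` before a batched `.map`:

```python
# Hypothetical sketch, not the original code: the checkpoint and the align_targets
# stub are assumptions made only to keep the example self-contained.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # assumed fast tokenizer

def align_targets(labels, word_ids):
    # Placeholder for the author's own alignment logic.
    return [-100 if word_id is None else 0 for word_id in word_ids]

def tokenize_fn(batch):
    # batch['input'] is a list of strings in batched mode
    tokenized_inputs = tokenizer(batch["input"], truncation=True)
    # tokenize each target string individually; passing the whole list to
    # tokenizer.tokenize() is a common way to trigger the TextEncodeInput TypeError
    labels_batch = [tokenizer.tokenize(text) for text in batch["output"]]
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        # word_ids(batch_index=i) maps each sub-token of input i back to its word
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        aligned_labels_batch.append(align_targets(labels, word_ids))
    tokenized_inputs["labels"] = aligned_labels_batch
    return tokenized_inputs

# usage: data.map(tokenize_fn, batched=True, remove_columns=data["train"].column_names)
```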
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5889/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5889/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7214
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7214/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7214/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7214/events
|
https://github.com/huggingface/datasets/issues/7214
| 2,578,743,713
|
I_kwDODunzps6ZtIGh
| 7,214
|
Formatted map + with_format(None) changes array dtype for iterable datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"possibly due to this logic:\r\n\r\n```python\r\n def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:\r\n if isinstance(pa_array, pa.ChunkedArray):\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and all(\r\n not _is_array_with_nulls(chunk) for chunk in pa_array.chunks\r\n )\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and not _is_array_with_nulls(pa_array)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```"
] | 2024-10-10T12:45:16Z
| 2024-10-12T16:55:57Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When applying `with_format` -> `map` -> `with_format(None)`, array dtypes change, even if `features` are passed to `map`.
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Array3D, Dataset, Features

features = Features(**{"array0": Array3D((None, 10, 10), dtype="float32")})
dataset = Dataset.from_dict({"array0": [np.zeros((100, 10, 10), dtype=np.float32)] * 25}, features=features)
ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features)
ex_0 = next(iter(ds))
ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features).with_format(None)
ex_1 = next(iter(ds))
assert ex_1["array0"].dtype == ex_0["array0"].dtype, f"{ex_1['array0'].dtype} {ex_0['array0'].dtype}"
```
### Expected behavior
Dtypes should be preserved.
### Environment info
3.0.2
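
Until the dtype handling is fixed, one hedged workaround (an assumption, not a confirmed fix) is to cast arrays back to the dtype declared in the features after iterating:

```python
# Workaround sketch (assumed): re-cast after iteration so the dtype matches the Array3D feature.
import numpy as np

ex_1 = next(iter(ds))
array0 = np.asarray(ex_1["array0"], dtype=np.float32)
```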
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7214/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7214/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6775
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6775/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6775/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6775/events
|
https://github.com/huggingface/datasets/issues/6775
| 2,223,457,792
|
I_kwDODunzps6Eh0YA
| 6,775
|
IndexError: Invalid key: 0 is out of bounds for size 0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38481564?v=4",
"events_url": "https://api.github.com/users/kk2491/events{/privacy}",
"followers_url": "https://api.github.com/users/kk2491/followers",
"following_url": "https://api.github.com/users/kk2491/following{/other_user}",
"gists_url": "https://api.github.com/users/kk2491/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kk2491",
"id": 38481564,
"login": "kk2491",
"node_id": "MDQ6VXNlcjM4NDgxNTY0",
"organizations_url": "https://api.github.com/users/kk2491/orgs",
"received_events_url": "https://api.github.com/users/kk2491/received_events",
"repos_url": "https://api.github.com/users/kk2491/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kk2491/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kk2491/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kk2491",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Same problem.",
"Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in https://github.com/huggingface/peft/issues/1299.\r\n\r\n(I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container) ",
"I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess. ",
"> Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in [huggingface/peft#1299](https://github.com/huggingface/peft/issues/1299).\r\n> \r\n> (I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the docker container)\r\n\r\n@mariosasko Thanks for the response and suggestion. \r\nWhen I set `remove_unused_columns` as `False` , I end up getting different error (will post the error soon). \r\nEither the Vertex-AI does not support `remove_unused_columns` or my dataset is completely wrong. \r\n\r\nThank you, \r\nKK",
"> I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n\r\n@cyberyu Thanks for your suggestions. \r\nI have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. \r\nHowever in my case, the issue persists. I am gonna give few more tries, and post the results here. \r\nYou can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main) \r\n\r\nThank you, \r\nKK ",
"> > I had the same problem, but I spent a whole day trying different combination with my own dataset with the example data set and found the reason: the example data is multi-turn conversation between human and assistant, so # Humman or # Assistant appear at least twice. If your own custom data only has single turn conversation, it might end up with the same error. What you can do is repeat your single turn conversation twice in your training data (keep the key 'text' the same) and maybe it works. I guess the reason is the specific way processing the data requires and counts multi-turn only (single turn will be discarded so it ends up with no training data), but since I am using Google Vertex AI, I don't have direct access to the underlying code so that was just my guess.\r\n> \r\n> @cyberyu Thanks for your suggestions. I have tried the approach you suggested, copied the same conversation in each jsonl element so every jsonl item has 2 `HUMAN` and `ASSISTANT`. However in my case, the issue persists. I am gonna give few more tries, and post the results here. You can find my dataset [here](https://huggingface.co/datasets/kk2491/test/tree/main)\r\n> \r\n> Thank you, KK\r\n\r\nI think another reason is your training sample length is too short. I saw a relevant report (https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/16) stating that the processing code might have a bug discarding sequence length short than max_seq_length, which is 512. Not sure the Vertex AI backend code has fixed that bug or not. So I tried to add some garbage content in your data, and extended the length longer than 512 for a single turn, and repeated twice. You can copy the following line as 5 repeated lines as your training data jsonl file of five samples (no eval or test needed, for speed up, set evaluation step to 5 and training step to 10,), and it will pass.\r\n\r\n{\"text\":\"### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment. ### Human: You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. 
You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You are a helpful AI Assistant familiar with customer service. You will handle customers queries and provide effective help message. Please provide response to 'Can Interplai software optimize routes for minimizing package handling and transfer times in distribution centers'? ### Assistant: Yes, Interplai software can optimize routes for distribution centers by streamlining package handling processes, minimizing transfer times between loading docks and storage areas, and optimizing warehouse layouts for efficient order fulfillment.\"}\r\n",
"@cyberyu **Thank you so much, You saved my day (+ so many days)**. \r\nI tried the example you provided above, and the training is successfully completed in Vertex-AI (through GUI). \r\nI never thought there would be constraints on the length of the samples and also on the number of turns. \r\nI will update my complete dataset and see update here once the training is completed. \r\n\r\nThank you, \r\nKK "
] | 2024-04-03T17:06:30Z
| 2024-04-08T01:24:35Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am trying to fine-tune the llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the example, the training gets successfully completed (example dataset can be found [here](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)).
However, when I use my own dataset, which is in the same format as the example dataset, I get the error below (my dataset can be found [here](https://huggingface.co/datasets/kk2491/finetune_dataset_002)).

I see the files are being read correctly from the logs:

### Steps to reproduce the bug
1. Clone the [vertex-ai-samples](https://github.com/GoogleCloudPlatform/vertex-ai-samples) repository.
2. Run the [llama2-7b peft fine-tuning](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
3. Change the dataset to `kk2491/finetune_dataset_002`.
### Expected behavior
The training should complete successfully, and the model should be deployed to an endpoint.
### Environment info
Python version : Python 3.10.12
Dataset : https://huggingface.co/datasets/kk2491/finetune_dataset_002
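
For reference, the fix suggested in the comments above is to stop the `transformers` `Trainer` from dropping dataset columns; here is a minimal sketch (the output directory is a placeholder, and wiring this flag into the Vertex AI container is outside the scope of this sketch):

```python
from transformers import TrainingArguments

# remove_unused_columns=False keeps columns the model's forward() does not name,
# which the linked peft issue identifies as the cause of the size-0 dataset.
training_args = TrainingArguments(
    output_dir="out",  # placeholder
    remove_unused_columns=False,
)
```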
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6775/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6775/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5155
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5155/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5155/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5155/events
|
https://github.com/huggingface/datasets/pull/5155
| 1,421,278,748
|
PR_kwDODunzps5BcCYr
| 5,155
|
TextConfig: added "errors"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)",
"[**@lhoestq**](https://github.com/lhoestq) commented on [Oct 27, 2022, 4:08 PM GMT+3:30](https://github.com/huggingface/datasets/pull/5155#issuecomment-1293464680 \"2022-10-27T12:38:04Z - Replied by Github Reply Comments\"):\r\n> Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)\r\n\r\nI ran this and force pushed the changes."
] | 2022-10-24T18:56:52Z
| 2022-11-03T13:38:13Z
| 2022-11-03T13:35:35Z
|
CONTRIBUTOR
| null | null | null |
This patch adds the ability to set the `errors` option of `open` for loading text datasets. I needed it because some data I had scraped had bad bytes in it, so I needed `errors='ignore'`.
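
A minimal usage sketch of the new option (the file path is a placeholder); after this change the config kwarg is forwarded to the text builder and on to `open`:

```python
from datasets import load_dataset

# errors="ignore" is passed through to open(), so undecodable bytes are dropped
# instead of raising UnicodeDecodeError.
ds = load_dataset("text", data_files="scraped.txt", errors="ignore")
```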
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5155/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5155/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5155",
"merged_at": "2022-11-03T13:35:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5155"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6228
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6228/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6228/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6228/events
|
https://github.com/huggingface/datasets/pull/6228
| 1,887,959,311
|
PR_kwDODunzps5Z5HZi
| 6,228
|
Remove RGB -> BGR image conversion in Object Detection tutorial
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009443 / 0.011353 (-0.001910) | 0.005274 / 0.011008 (-0.005734) | 0.105950 / 0.038508 (0.067441) | 0.079947 / 0.023109 (0.056837) | 0.414248 / 0.275898 (0.138350) | 0.440611 / 0.323480 (0.117131) | 0.006779 / 0.007986 (-0.001206) | 0.004301 / 0.004328 (-0.000028) | 0.080616 / 0.004250 (0.076366) | 0.061425 / 0.037052 (0.024372) | 0.418460 / 0.258489 (0.159971) | 0.468108 / 0.293841 (0.174267) | 0.051090 / 0.128546 (-0.077456) | 0.014133 / 0.075646 (-0.061513) | 0.376121 / 0.419271 (-0.043151) | 0.070715 / 0.043533 (0.027182) | 0.415435 / 0.255139 (0.160296) | 0.457925 / 0.283200 (0.174725) | 0.053653 / 0.141683 (-0.088030) | 1.872681 / 1.452155 (0.420527) | 1.961187 / 1.492716 (0.468470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255829 / 0.018006 (0.237823) | 0.574224 / 0.000490 (0.573735) | 0.007597 / 0.000200 (0.007397) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032562 / 0.037411 (-0.004849) | 0.097528 / 0.014526 (0.083003) | 0.113487 / 0.176557 (-0.063070) | 0.185670 / 0.737135 (-0.551465) | 0.118909 / 0.296338 (-0.177430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.611441 / 0.215209 (0.396232) | 5.908576 / 2.077655 (3.830921) | 2.586758 / 1.504120 (1.082638) | 2.310199 / 1.541195 (0.769004) | 2.333396 / 1.468490 
(0.864906) | 0.900884 / 4.584777 (-3.683893) | 5.438304 / 3.745712 (1.692591) | 4.806611 / 5.269862 (-0.463250) | 2.970631 / 4.565676 (-1.595046) | 0.097861 / 0.424275 (-0.326414) | 0.009873 / 0.007607 (0.002266) | 0.739553 / 0.226044 (0.513509) | 7.104953 / 2.268929 (4.836024) | 3.150128 / 55.444624 (-52.294497) | 2.469552 / 6.876477 (-4.406924) | 2.709206 / 2.142072 (0.567133) | 0.983081 / 4.805227 (-3.822147) | 0.205150 / 6.500664 (-6.295514) | 0.075947 / 0.075469 (0.000478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631255 / 1.841788 (-0.210532) | 24.213679 / 8.074308 (16.139370) | 21.514481 / 10.191392 (11.323089) | 0.220360 / 0.680424 (-0.460063) | 0.031663 / 0.534201 (-0.502538) | 0.516029 / 0.579283 (-0.063254) | 0.591461 / 0.434364 (0.157097) | 0.612398 / 0.540337 (0.072061) | 0.807609 / 1.386936 (-0.579328) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009443 / 0.011353 (-0.001910) | 0.005510 / 0.011008 (-0.005498) | 0.085722 / 0.038508 (0.047214) | 0.076256 / 0.023109 (0.053146) | 0.604248 / 0.275898 (0.328349) | 0.596222 / 0.323480 (0.272742) | 0.006786 / 0.007986 (-0.001200) | 0.004135 / 0.004328 (-0.000193) | 0.085934 / 0.004250 (0.081683) | 0.065890 / 0.037052 (0.028838) | 0.592080 / 0.258489 (0.333591) | 0.624560 / 0.293841 (0.330719) | 0.048200 / 0.128546 (-0.080346) | 0.015477 / 0.075646 (-0.060169) | 0.097042 / 0.419271 (-0.322230) | 0.060513 / 0.043533 (0.016981) | 0.557171 / 0.255139 (0.302032) | 0.582057 / 0.283200 (0.298858) | 0.035678 / 0.141683 (-0.106005) | 1.894947 / 1.452155 (0.442792) | 1.956652 / 1.492716 (0.463936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268927 / 0.018006 (0.250921) | 0.566086 / 0.000490 (0.565597) | 0.007190 / 0.000200 (0.006990) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.042090 / 0.037411 (0.004679) | 0.109618 / 0.014526 (0.095092) | 0.126588 / 0.176557 (-0.049968) | 0.200426 / 0.737135 (-0.536709) | 0.127032 / 0.296338 (-0.169306) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669773 / 0.215209 (0.454564) | 6.453417 / 2.077655 (4.375763) | 3.119147 / 1.504120 (1.615027) | 2.818632 / 1.541195 (1.277437) | 2.930880 / 1.468490 (1.462390) | 0.922164 / 4.584777 (-3.662612) | 5.769564 / 3.745712 (2.023852) | 4.885108 / 5.269862 (-0.384754) | 3.041640 / 4.565676 (-1.524037) | 0.100186 / 0.424275 (-0.324090) | 0.009417 / 0.007607 (0.001810) | 0.783138 / 0.226044 (0.557094) | 8.113361 / 2.268929 (5.844432) | 4.018630 / 55.444624 (-51.425995) | 3.246772 / 6.876477 (-3.629704) | 3.520690 / 2.142072 (1.378618) | 1.063686 / 4.805227 (-3.741541) | 0.218667 / 6.500664 (-6.281997) | 0.084169 / 0.075469 (0.008700) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.791949 / 1.841788 (-0.049839) | 23.148341 / 8.074308 (15.074033) | 23.321125 / 10.191392 (13.129733) | 0.245391 / 0.680424 (-0.435032) | 0.031911 / 0.534201 (-0.502290) | 0.470707 / 0.579283 (-0.108576) | 0.608195 / 0.434364 (0.173832) | 0.559590 / 0.540337 (0.019253) | 0.786007 / 1.386936 (-0.600929) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008428 / 0.011353 (-0.002925) | 0.004064 / 0.011008 (-0.006944) | 0.088421 / 0.038508 (0.049913) | 0.078042 / 0.023109 (0.054933) | 0.306356 / 0.275898 (0.030458) | 0.349766 / 0.323480 (0.026286) | 0.004086 / 0.007986 (-0.003900) | 0.003900 / 0.004328 (-0.000428) | 0.068379 / 0.004250 (0.064129) | 0.056214 / 0.037052 (0.019161) | 0.310211 / 0.258489 (0.051722) | 0.363692 / 0.293841 (0.069851) | 0.050421 / 0.128546 (-0.078125) | 0.011661 / 0.075646 (-0.063985) | 0.298400 / 0.419271 (-0.120871) | 0.063503 / 0.043533 (0.019970) | 0.339799 / 0.255139 (0.084660) | 0.359479 / 0.283200 (0.076279) | 0.039265 / 0.141683 (-0.102418) | 1.390578 / 1.452155 (-0.061576) | 1.573333 / 1.492716 (0.080617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260442 / 0.018006 (0.242436) | 0.560390 / 0.000490 (0.559900) | 0.003926 / 0.000200 (0.003726) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025809 / 0.037411 (-0.011602) | 0.081902 / 0.014526 (0.067376) | 0.093655 / 0.176557 (-0.082901) | 0.149432 / 0.737135 (-0.587703) | 0.099059 / 0.296338 (-0.197279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.505644 / 0.215209 (0.290435) | 5.108292 / 2.077655 (3.030638) | 2.121689 / 1.504120 (0.617569) | 1.846576 / 1.541195 (0.305381) | 1.836587 / 1.468490 
(0.368097) | 0.708088 / 4.584777 (-3.876689) | 4.562630 / 3.745712 (0.816918) | 3.934747 / 5.269862 (-1.335115) | 2.453409 / 4.565676 (-2.112267) | 0.081908 / 0.424275 (-0.342367) | 0.012996 / 0.007607 (0.005389) | 0.636588 / 0.226044 (0.410544) | 6.361086 / 2.268929 (4.092157) | 2.911681 / 55.444624 (-52.532943) | 2.271809 / 6.876477 (-4.604667) | 2.670327 / 2.142072 (0.528254) | 0.943688 / 4.805227 (-3.861539) | 0.191677 / 6.500664 (-6.308988) | 0.066008 / 0.075469 (-0.009461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.400139 / 1.841788 (-0.441648) | 21.896198 / 8.074308 (13.821890) | 17.853604 / 10.191392 (7.662212) | 0.226603 / 0.680424 (-0.453821) | 0.026682 / 0.534201 (-0.507518) | 0.460131 / 0.579283 (-0.119152) | 0.536790 / 0.434364 (0.102427) | 0.492913 / 0.540337 (-0.047424) | 0.724290 / 1.386936 (-0.662646) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007795 / 0.011353 (-0.003557) | 0.009045 / 0.011008 (-0.001963) | 0.085480 / 0.038508 (0.046972) | 0.071881 / 0.023109 (0.048772) | 0.514520 / 0.275898 (0.238622) | 0.569762 / 0.323480 (0.246282) | 0.006126 / 0.007986 (-0.001859) | 0.004153 / 0.004328 (-0.000175) | 0.072150 / 0.004250 (0.067900) | 0.056511 / 0.037052 (0.019458) | 0.484097 / 0.258489 (0.225607) | 0.532673 / 0.293841 (0.238832) | 0.040974 / 0.128546 (-0.087572) | 0.012071 / 0.075646 (-0.063575) | 0.102608 / 0.419271 (-0.316663) | 0.052893 / 0.043533 (0.009360) | 0.485832 / 0.255139 (0.230693) | 0.530479 / 0.283200 (0.247280) | 0.031556 / 0.141683 (-0.110127) | 1.737508 / 1.452155 (0.285354) | 1.834637 / 1.492716 (0.341921) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.423314 / 0.018006 (0.405308) | 0.614163 / 0.000490 (0.613673) | 0.052784 / 0.000200 (0.052584) | 0.000206 / 0.000054 (0.000151) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031728 / 0.037411 (-0.005684) | 0.088048 / 0.014526 (0.073522) | 0.105759 / 0.176557 (-0.070798) | 0.181433 / 0.737135 (-0.555703) | 0.103133 / 0.296338 (-0.193205) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659710 / 0.215209 (0.444501) | 5.876378 / 2.077655 (3.798723) | 2.899444 / 1.504120 (1.395324) | 2.871592 / 1.541195 (1.330397) | 2.861205 / 1.468490 (1.392715) | 0.879452 / 4.584777 (-3.705325) | 5.395988 / 3.745712 (1.650275) | 4.548359 / 5.269862 (-0.721502) | 2.946601 / 4.565676 (-1.619076) | 0.099832 / 0.424275 (-0.324443) | 0.008958 / 0.007607 (0.001351) | 0.778480 / 0.226044 (0.552435) | 7.672282 / 2.268929 (5.403354) | 3.963701 / 55.444624 (-51.480923) | 3.154950 / 6.876477 (-3.721527) | 3.351070 / 2.142072 (1.208997) | 1.059459 / 4.805227 (-3.745768) | 0.212035 / 6.500664 (-6.288629) | 0.076941 / 0.075469 (0.001472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.639813 / 1.841788 (-0.201975) | 24.807517 / 8.074308 (16.733208) | 20.662500 / 10.191392 (10.471108) | 0.244486 / 0.680424 (-0.435937) | 0.032335 / 0.534201 (-0.501866) | 0.470896 / 0.579283 (-0.108387) | 0.581561 / 0.434364 (0.147197) | 0.495158 / 0.540337 (-0.045179) | 0.788350 / 1.386936 (-0.598586) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-08T16:09:13Z
| 2023-09-08T18:02:49Z
| 2023-09-08T17:52:16Z
|
COLLABORATOR
| null | null | null |
Fix #6225
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6228/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6228/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6228.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6228",
"merged_at": "2023-09-08T17:52:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6228.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6228"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7084
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7084/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7084/events
|
https://github.com/huggingface/datasets/issues/7084
| 2,439,519,534
|
I_kwDODunzps6RaB0u
| 7,084
|
More easily support streaming local files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-07-31T09:03:15Z
| 2024-07-31T09:05:58Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-Edu locally and am currently trying to stream the dataset from the local files. I have both the raw parquet files (downloaded with `huggingface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu`) and the processed arrow files (produced by `load_dataset("HuggingFaceFW/fineweb-edu")`).
Streaming the files locally does not work well for either file type, for two different reasons.
**Arrow files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})`, resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738), all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue.
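For reference, a minimal sketch of that call (the cache path is the one from this report and is machine-specific; `streaming=True` is added here only to show the streaming use case):

```python
# Loading locally cached Arrow shards with the packaged "arrow" builder.
# The cache path below comes from this report and is machine-specific.
from datasets import load_dataset

ds = load_dataset(
    "arrow",
    data_files={
        "train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"
    },
    streaming=True,
)
# Resolving the glob is fast, but because ".arrow" is not a known extension,
# every matched file is opened to sniff its compression type before iteration
# can start.
```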
**Parquet files**
When running `load_dataset("parquet", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})`, the paths do not get resolved, because the parquet files are symlinked from the blobs directory (which contains all files, in case there are different versions). This happens because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks whether the path is a file but does not account for symlinks, which (at least on my machine) are reported as type "other".
### Your contribution
I have created a PR that fixes arrow file streaming and symlink resolution. However, I have not checked locally whether the tests pass or whether new tests need to be added.
IMO, the easiest option would be to add a `streaming="download_first"` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083
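For illustration, the proposed option might look roughly like this from the user side; note that `streaming="download_first"` is the feature being requested here, not an existing `load_dataset` argument:

```python
from datasets import load_dataset

# Hypothetical usage of the requested feature: download the dataset once,
# then stream examples from the local copy instead of over the network.
# `streaming="download_first"` does not exist today; it is the proposal above.
ds = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    split="train",
    streaming="download_first",
)
for example in ds:
    print(example["text"][:100])  # assumes a "text" column, as in FineWeb-Edu
    break
```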
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7084/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5496
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5496/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5496/events
|
https://github.com/huggingface/datasets/issues/5496
| 1,567,301,765
|
I_kwDODunzps5dayCF
| 5,496
|
Add a `reduce` method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4",
"events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangir-azerbayev/followers",
"following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangir-azerbayev",
"id": 59542043,
"login": "zhangir-azerbayev",
"node_id": "MDQ6VXNlcjU5NTQyMDQz",
"organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs",
"received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events",
"repos_url": "https://api.github.com/users/zhangir-azerbayev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangir-azerbayev",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! Sure, feel free to open a PR, so we can see the API you have in mind.",
"I would like to give it a go! #self-assign",
"Closing as `Dataset.map` can be used instead (see https://github.com/huggingface/datasets/pull/5533#issuecomment-1440571658 and https://github.com/huggingface/datasets/pull/5533#issuecomment-1446403263)",
"Hello, is it possible for this issue/PR to be revisited? The problem with the alternatives presented (besides multiple map stages) is that they don't use the cache. A reduce operation is just as expensive as a map operation because it also goes over the entire dataset. It's equally worth caching.\r\n\r\nPersonally, I have a situation where I would need this and map is far from ideal. I'm working on updating a project of mine to use Huggingface Datasets, and I need to port the loop at https://github.com/colonelwatch/abstracts-search/blob/b90f31ee4cc6e394f829d3a6d9d0311ca390ada9/train.py#L112-L138. Please forgive the code style, here's what it does in English. I have a dataset of about 95 million embeddings, out of which 16384 is taken as a \"query\" set. For each embedding in the query set, I need to find the ten closest neighbors. These nearest neighbors are used to tune the parameters of a faiss index. The solution is to set up an \"accumulator\" comprising of the ten closest so far and their distances, then do a single scan over the 95 million (memmapped), then save the results of the \"accumulator\" for when I want to prototype another index.\r\n\r\nThe closest approximation to this is multiple map stages, but with such a large \"accumulator\" having the RAM to do a big batch size becomes critical. At a batch size of 1000, the intermediate accumulators would in theory be about 120 GB! That can be more if I want higher precision than float32. It would already be about the same size as the original embeddings. Using larger batch sizes puts strain on the RAM because I'd be dealing with batch_size x 16384 distances. The best I'd gotten with my RAM, single-threaded, was 65536, and for speed I had to use that thread to feed a GPU. It'd be better if I could use multiple threads to get high throughput instead, or even do all the work in CPU, but to fit the threads I'd need the batch size to be smaller.\r\n\r\nAll of this intermediate memory could be eliminated if there was a reduce operation."
] | 2023-02-02T04:30:22Z
| 2024-11-12T05:58:14Z
| 2023-07-21T14:24:32Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.
### Motivation
A `reduce` method is often useful when calculating dataset statistics, for example the number of occurrences of a particular n-gram or the average line length of a code dataset.
### Your contribution
I haven't contributed to `datasets` before, but I don't expect this to be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack at it over the weekend.
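For context, the kind of statistic mentioned in the motivation can already be expressed as a Python fold over the dataset; the sketch below only illustrates the idiom (the data file name and the `text` column are placeholders), it is not the proposed API:

```python
# Emulating a reduce with functools.reduce over a Dataset (iterating yields
# one example dict at a time). "code_corpus.jsonl" and the "text" column are
# placeholders for illustration.
from functools import reduce
from datasets import load_dataset

ds = load_dataset("json", data_files="code_corpus.jsonl", split="train")

def accumulate(state, example):
    total_chars, total_lines = state
    lines = example["text"].splitlines()
    return total_chars + sum(len(line) for line in lines), total_lines + len(lines)

total_chars, total_lines = reduce(accumulate, ds, (0, 0))
print("average line length:", total_chars / max(total_lines, 1))
```

A built-in `Dataset.reduce` would presumably wrap the same fold while adding batching and result caching, as `map` and `filter` do.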
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5496/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5863
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5863/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5863/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5863/events
|
https://github.com/huggingface/datasets/pull/5863
| 1,710,335,905
|
PR_kwDODunzps5QhtlM
| 5,863
|
Use a new low-memory approach for tf dataset index shuffling
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5863). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003588) | 0.005397 / 0.011008 (-0.005611) | 0.097995 / 0.038508 (0.059487) | 0.036360 / 0.023109 (0.013251) | 0.312148 / 0.275898 (0.036250) | 0.349427 / 0.323480 (0.025947) | 0.006635 / 0.007986 (-0.001350) | 0.004373 / 0.004328 (0.000044) | 0.074350 / 0.004250 (0.070099) | 0.054667 / 0.037052 (0.017614) | 0.301621 / 0.258489 (0.043132) | 0.364233 / 0.293841 (0.070392) | 0.035356 / 0.128546 (-0.093191) | 0.012512 / 0.075646 (-0.063134) | 0.333399 / 0.419271 (-0.085873) | 0.051363 / 0.043533 (0.007830) | 0.302372 / 0.255139 (0.047233) | 0.326542 / 0.283200 (0.043343) | 0.118610 / 0.141683 (-0.023073) | 1.438485 / 1.452155 (-0.013669) | 1.539131 / 1.492716 (0.046415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010920 / 0.018006 (-0.007086) | 0.561263 / 0.000490 (0.560773) | 0.003972 / 0.000200 (0.003772) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030333 / 0.037411 (-0.007078) | 0.113608 / 0.014526 (0.099083) | 0.125802 / 0.176557 (-0.050755) | 0.183885 / 0.737135 (-0.553250) | 0.130242 / 0.296338 (-0.166097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404147 / 0.215209 (0.188938) | 4.021990 / 2.077655 (1.944335) | 1.821450 / 1.504120 (0.317330) | 1.619032 / 1.541195 (0.077837) | 1.791267 / 1.468490 
(0.322777) | 0.706683 / 4.584777 (-3.878094) | 3.819056 / 3.745712 (0.073344) | 3.485714 / 5.269862 (-1.784147) | 1.938968 / 4.565676 (-2.626709) | 0.086501 / 0.424275 (-0.337774) | 0.012300 / 0.007607 (0.004693) | 0.503600 / 0.226044 (0.277555) | 5.042123 / 2.268929 (2.773195) | 2.269712 / 55.444624 (-53.174912) | 1.944912 / 6.876477 (-4.931565) | 2.155196 / 2.142072 (0.013123) | 0.853434 / 4.805227 (-3.951793) | 0.175554 / 6.500664 (-6.325110) | 0.072005 / 0.075469 (-0.003464) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203765 / 1.841788 (-0.638022) | 15.836634 / 8.074308 (7.762326) | 15.707348 / 10.191392 (5.515956) | 0.164828 / 0.680424 (-0.515596) | 0.018115 / 0.534201 (-0.516086) | 0.434591 / 0.579283 (-0.144692) | 0.437858 / 0.434364 (0.003495) | 0.524672 / 0.540337 (-0.015665) | 0.610535 / 1.386936 (-0.776401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007558 / 0.011353 (-0.003795) | 0.005258 / 0.011008 (-0.005750) | 0.075263 / 0.038508 (0.036755) | 0.033915 / 0.023109 (0.010805) | 0.371368 / 0.275898 (0.095470) | 0.399239 / 0.323480 (0.075760) | 0.006547 / 0.007986 (-0.001439) | 0.004675 / 0.004328 (0.000347) | 0.074230 / 0.004250 (0.069980) | 0.054653 / 0.037052 (0.017601) | 0.376655 / 0.258489 (0.118166) | 0.438437 / 0.293841 (0.144596) | 0.035838 / 0.128546 (-0.092709) | 0.012641 / 0.075646 (-0.063005) | 0.087279 / 0.419271 (-0.331993) | 0.046311 / 0.043533 (0.002778) | 0.356649 / 0.255139 (0.101510) | 0.377876 / 0.283200 (0.094677) | 0.108097 / 0.141683 (-0.033586) | 1.478461 / 1.452155 (0.026306) | 1.560375 / 1.492716 (0.067658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316384 / 0.018006 (0.298378) | 0.539382 / 0.000490 (0.538892) | 0.002029 / 0.000200 (0.001829) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029950 / 0.037411 (-0.007462) | 0.111371 / 0.014526 (0.096846) | 0.125254 / 0.176557 (-0.051303) | 0.173064 / 0.737135 (-0.564071) | 0.130446 / 0.296338 (-0.165893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424882 / 0.215209 (0.209673) | 4.241575 / 2.077655 (2.163920) | 2.096216 / 1.504120 (0.592096) | 1.916017 / 1.541195 (0.374823) | 2.016318 / 1.468490 (0.547828) | 0.701197 / 4.584777 (-3.883580) | 3.762365 / 3.745712 (0.016652) | 3.307805 / 5.269862 (-1.962057) | 1.841752 / 4.565676 (-2.723925) | 0.086003 / 0.424275 (-0.338272) | 0.012247 / 0.007607 (0.004640) | 0.532926 / 0.226044 (0.306882) | 5.370509 / 2.268929 (3.101580) | 2.587853 / 55.444624 (-52.856772) | 2.264541 / 6.876477 (-4.611936) | 2.374833 / 2.142072 (0.232760) | 0.827751 / 4.805227 (-3.977476) | 0.169454 / 6.500664 (-6.331210) | 0.066340 / 0.075469 (-0.009129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319128 / 1.841788 (-0.522660) | 16.702085 / 8.074308 (8.627777) | 13.559957 / 10.191392 (3.368565) | 0.146659 / 0.680424 (-0.533765) | 0.017384 / 0.534201 (-0.516817) | 0.421126 / 0.579283 (-0.158157) | 0.422067 / 0.434364 (-0.012297) | 0.490615 / 0.540337 (-0.049723) | 0.587151 / 1.386936 (-0.799785) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006604 / 0.011353 (-0.004749) | 0.004508 / 0.011008 (-0.006500) | 0.098652 / 0.038508 (0.060144) | 0.028172 / 0.023109 (0.005063) | 0.366997 / 0.275898 (0.091099) | 0.403691 / 0.323480 (0.080211) | 0.005127 / 0.007986 (-0.002859) | 0.003340 / 0.004328 (-0.000989) | 0.075408 / 0.004250 (0.071157) | 0.038049 / 0.037052 (0.000996) | 0.367914 / 0.258489 (0.109425) | 0.410958 / 0.293841 (0.117118) | 0.030454 / 0.128546 (-0.098093) | 0.011422 / 0.075646 (-0.064224) | 0.325048 / 0.419271 (-0.094223) | 0.042959 / 0.043533 (-0.000574) | 0.374536 / 0.255139 (0.119397) | 0.394738 / 0.283200 (0.111538) | 0.090481 / 0.141683 (-0.051201) | 1.504858 / 1.452155 (0.052703) | 1.569072 / 1.492716 (0.076356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010062 / 0.018006 (-0.007945) | 0.408619 / 0.000490 (0.408130) | 0.002307 / 0.000200 (0.002107) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022898 / 0.037411 (-0.014514) | 0.096975 / 0.014526 (0.082449) | 0.103032 / 0.176557 (-0.073524) | 0.164877 / 0.737135 (-0.572259) | 0.107324 / 0.296338 (-0.189014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446652 / 0.215209 (0.231442) | 4.466939 / 2.077655 (2.389285) | 2.204590 / 1.504120 (0.700471) | 2.004048 / 1.541195 (0.462853) | 2.053035 / 1.468490 
(0.584545) | 0.696617 / 4.584777 (-3.888160) | 3.391173 / 3.745712 (-0.354539) | 1.863306 / 5.269862 (-3.406556) | 1.160637 / 4.565676 (-3.405039) | 0.083115 / 0.424275 (-0.341160) | 0.012470 / 0.007607 (0.004862) | 0.547207 / 0.226044 (0.321163) | 5.500667 / 2.268929 (3.231739) | 2.656615 / 55.444624 (-52.788009) | 2.313281 / 6.876477 (-4.563195) | 2.395632 / 2.142072 (0.253559) | 0.815361 / 4.805227 (-3.989867) | 0.152112 / 6.500664 (-6.348552) | 0.067485 / 0.075469 (-0.007984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206975 / 1.841788 (-0.634813) | 13.684136 / 8.074308 (5.609828) | 13.919129 / 10.191392 (3.727737) | 0.140767 / 0.680424 (-0.539657) | 0.016445 / 0.534201 (-0.517756) | 0.379136 / 0.579283 (-0.200147) | 0.385395 / 0.434364 (-0.048969) | 0.445781 / 0.540337 (-0.094556) | 0.522056 / 1.386936 (-0.864880) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006370 / 0.011353 (-0.004983) | 0.004514 / 0.011008 (-0.006495) | 0.075671 / 0.038508 (0.037163) | 0.026723 / 0.023109 (0.003614) | 0.359819 / 0.275898 (0.083921) | 0.387935 / 0.323480 (0.064456) | 0.004888 / 0.007986 (-0.003098) | 0.004619 / 0.004328 (0.000290) | 0.075546 / 0.004250 (0.071295) | 0.039024 / 0.037052 (0.001971) | 0.361173 / 0.258489 (0.102684) | 0.411425 / 0.293841 (0.117584) | 0.030842 / 0.128546 (-0.097705) | 0.011555 / 0.075646 (-0.064091) | 0.084697 / 0.419271 (-0.334574) | 0.039281 / 0.043533 (-0.004252) | 0.370082 / 0.255139 (0.114943) | 0.382113 / 0.283200 (0.098913) | 0.091237 / 0.141683 (-0.050445) | 1.534185 / 1.452155 (0.082030) | 1.576488 / 1.492716 (0.083772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226568 / 0.018006 (0.208562) | 0.401566 / 0.000490 (0.401076) | 0.002915 / 0.000200 (0.002715) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025357 / 0.037411 (-0.012054) | 0.099747 / 0.014526 (0.085221) | 0.106443 / 0.176557 (-0.070113) | 0.157147 / 0.737135 (-0.579989) | 0.110759 / 0.296338 (-0.185580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444648 / 0.215209 (0.229439) | 4.437930 / 2.077655 (2.360275) | 2.154033 / 1.504120 (0.649913) | 1.958351 / 1.541195 (0.417157) | 1.991031 / 1.468490 (0.522541) | 0.691440 / 4.584777 (-3.893337) | 3.369087 / 3.745712 (-0.376625) | 1.847103 / 5.269862 (-3.422758) | 1.152509 / 4.565676 (-3.413168) | 0.082519 / 0.424275 (-0.341756) | 0.012609 / 0.007607 (0.005001) | 0.547267 / 0.226044 (0.321222) | 5.501335 / 2.268929 (3.232407) | 2.621079 / 55.444624 (-52.823545) | 2.281332 / 6.876477 (-4.595145) | 2.300427 / 2.142072 (0.158354) | 0.803611 / 4.805227 (-4.001616) | 0.151784 / 6.500664 (-6.348880) | 0.067801 / 0.075469 (-0.007669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343201 / 1.841788 (-0.498587) | 13.901033 / 8.074308 (5.826725) | 13.114738 / 10.191392 (2.923346) | 0.149358 / 0.680424 (-0.531066) | 0.016596 / 0.534201 (-0.517605) | 0.377310 / 0.579283 (-0.201973) | 0.387045 / 0.434364 (-0.047319) | 0.441272 / 0.540337 (-0.099065) | 0.525783 / 1.386936 (-0.861153) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008147 / 0.011353 (-0.003205) | 0.005531 / 0.011008 (-0.005477) | 0.099796 / 0.038508 (0.061288) | 0.041574 / 0.023109 (0.018465) | 0.315752 / 0.275898 (0.039854) | 0.369846 / 0.323480 (0.046366) | 0.006489 / 0.007986 (-0.001497) | 0.004339 / 0.004328 (0.000010) | 0.074769 / 0.004250 (0.070519) | 0.051313 / 0.037052 (0.014261) | 0.313463 / 0.258489 (0.054974) | 0.369918 / 0.293841 (0.076077) | 0.035893 / 0.128546 (-0.092653) | 0.012487 / 0.075646 (-0.063159) | 0.336464 / 0.419271 (-0.082807) | 0.052870 / 0.043533 (0.009337) | 0.310795 / 0.255139 (0.055656) | 0.333146 / 0.283200 (0.049946) | 0.112813 / 0.141683 (-0.028870) | 1.488192 / 1.452155 (0.036038) | 1.563438 / 1.492716 (0.070721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015015 / 0.018006 (-0.002991) | 0.531783 / 0.000490 (0.531294) | 0.005039 / 0.000200 (0.004839) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030205 / 0.037411 (-0.007207) | 0.115997 / 0.014526 (0.101471) | 0.122958 / 0.176557 (-0.053599) | 0.186956 / 0.737135 (-0.550180) | 0.130268 / 0.296338 (-0.166071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402648 / 0.215209 (0.187439) | 3.996121 / 2.077655 (1.918466) | 1.811715 / 1.504120 (0.307595) | 1.640805 / 1.541195 (0.099610) | 1.810478 / 1.468490 
(0.341988) | 0.699996 / 4.584777 (-3.884781) | 3.834020 / 3.745712 (0.088308) | 3.688364 / 5.269862 (-1.581498) | 1.973828 / 4.565676 (-2.591849) | 0.087085 / 0.424275 (-0.337190) | 0.012501 / 0.007607 (0.004894) | 0.498934 / 0.226044 (0.272889) | 4.977608 / 2.268929 (2.708680) | 2.258678 / 55.444624 (-53.185947) | 1.934251 / 6.876477 (-4.942226) | 2.177409 / 2.142072 (0.035337) | 0.873470 / 4.805227 (-3.931757) | 0.173132 / 6.500664 (-6.327532) | 0.069144 / 0.075469 (-0.006325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.181554 / 1.841788 (-0.660234) | 15.694468 / 8.074308 (7.620160) | 15.026954 / 10.191392 (4.835562) | 0.167092 / 0.680424 (-0.513332) | 0.017921 / 0.534201 (-0.516280) | 0.425649 / 0.579283 (-0.153634) | 0.423225 / 0.434364 (-0.011139) | 0.522132 / 0.540337 (-0.018205) | 0.612806 / 1.386936 (-0.774130) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007896 / 0.011353 (-0.003457) | 0.005581 / 0.011008 (-0.005427) | 0.076338 / 0.038508 (0.037830) | 0.037064 / 0.023109 (0.013954) | 0.399706 / 0.275898 (0.123808) | 0.431698 / 0.323480 (0.108218) | 0.006846 / 0.007986 (-0.001140) | 0.006010 / 0.004328 (0.001682) | 0.075771 / 0.004250 (0.071520) | 0.058214 / 0.037052 (0.021161) | 0.395753 / 0.258489 (0.137264) | 0.459925 / 0.293841 (0.166084) | 0.036349 / 0.128546 (-0.092197) | 0.012720 / 0.075646 (-0.062926) | 0.087248 / 0.419271 (-0.332024) | 0.049405 / 0.043533 (0.005872) | 0.387576 / 0.255139 (0.132437) | 0.409861 / 0.283200 (0.126661) | 0.111639 / 0.141683 (-0.030043) | 1.482840 / 1.452155 (0.030685) | 1.574465 / 1.492716 (0.081749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320628 / 0.018006 (0.302622) | 0.556338 / 0.000490 (0.555848) | 0.000445 / 0.000200 (0.000245) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032905 / 0.037411 (-0.004507) | 0.121253 / 0.014526 (0.106727) | 0.127241 / 0.176557 (-0.049316) | 0.178090 / 0.737135 (-0.559045) | 0.143285 / 0.296338 (-0.153054) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437852 / 0.215209 (0.222643) | 4.369770 / 2.077655 (2.292115) | 2.219932 / 1.504120 (0.715812) | 2.032520 / 1.541195 (0.491325) | 2.154300 / 1.468490 (0.685810) | 0.678942 / 4.584777 (-3.905835) | 3.768148 / 3.745712 (0.022436) | 2.152738 / 5.269862 (-3.117124) | 1.341480 / 4.565676 (-3.224197) | 0.084326 / 0.424275 (-0.339949) | 0.012288 / 0.007607 (0.004681) | 0.547677 / 0.226044 (0.321633) | 5.496777 / 2.268929 (3.227848) | 2.702267 / 55.444624 (-52.742357) | 2.388580 / 6.876477 (-4.487897) | 2.471673 / 2.142072 (0.329601) | 0.833645 / 4.805227 (-3.971582) | 0.167113 / 6.500664 (-6.333551) | 0.067658 / 0.075469 (-0.007811) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282050 / 1.841788 (-0.559737) | 16.413677 / 8.074308 (8.339369) | 14.080910 / 10.191392 (3.889518) | 0.171782 / 0.680424 (-0.508642) | 0.018186 / 0.534201 (-0.516015) | 0.425244 / 0.579283 (-0.154039) | 0.430260 / 0.434364 (-0.004104) | 0.500838 / 0.540337 (-0.039499) | 0.591900 / 1.386936 (-0.795036) |\n\n</details>\n</details>\n\n\n",
"The approach we take here is to no longer materialize the entire index array or shuffle buffer. Instead, we do the following:\r\n\r\n1) Generate a dataset with `tf.data.Dataset.range`. This dataset is not materialized - it's basically a range iterator.\r\n2) When we begin iterating over a dataset, generate a random seed. This value is constant for each pass over the dataset, and is regenerated if we start a new iteration or epoch over the dataset.\r\n3) Map the range dataset and the random seed with `tf.random.index_shuffle`. This converts indices into the equivalent values in a permuted array. In other words `tf.random.index_shuffle(indices, maxval=50_000_000)` is equivalent to `np.random.permutation(50_000_000)[indices]`, but without ever materializing the `np.random.permutation(50_000_000)` array.\r\n\r\nUsing this approach gives us a complete iteration over the dataset that does not skip any samples, compiles in TF and also never materializes the complete index array, which should avoid the memory usage issues. I'm testing that now!",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008395 / 0.011353 (-0.002958) | 0.005893 / 0.011008 (-0.005115) | 0.117081 / 0.038508 (0.078573) | 0.040987 / 0.023109 (0.017878) | 0.394234 / 0.275898 (0.118336) | 0.447036 / 0.323480 (0.123556) | 0.006703 / 0.007986 (-0.001283) | 0.006085 / 0.004328 (0.001757) | 0.086479 / 0.004250 (0.082228) | 0.050192 / 0.037052 (0.013140) | 0.400958 / 0.258489 (0.142469) | 0.455551 / 0.293841 (0.161710) | 0.041481 / 0.128546 (-0.087065) | 0.014135 / 0.075646 (-0.061511) | 0.399929 / 0.419271 (-0.019343) | 0.060824 / 0.043533 (0.017291) | 0.395946 / 0.255139 (0.140807) | 0.428811 / 0.283200 (0.145611) | 0.120057 / 0.141683 (-0.021626) | 1.703244 / 1.452155 (0.251090) | 1.841153 / 1.492716 (0.348436) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.021826 / 0.018006 (0.003820) | 0.494279 / 0.000490 (0.493789) | 0.011258 / 0.000200 (0.011058) | 0.000382 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031651 / 0.037411 (-0.005760) | 0.132871 / 0.014526 (0.118345) | 0.137388 / 0.176557 (-0.039169) | 0.205808 / 0.737135 (-0.531327) | 0.147585 / 0.296338 (-0.148753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474483 / 0.215209 (0.259274) | 4.726568 / 2.077655 (2.648914) | 2.136172 / 1.504120 (0.632052) | 1.918364 / 1.541195 (0.377169) | 2.068794 / 1.468490 
(0.600304) | 0.836481 / 4.584777 (-3.748296) | 4.550583 / 3.745712 (0.804871) | 2.456287 / 5.269862 (-2.813574) | 1.563127 / 4.565676 (-3.002550) | 0.102541 / 0.424275 (-0.321734) | 0.014492 / 0.007607 (0.006885) | 0.598572 / 0.226044 (0.372528) | 5.953321 / 2.268929 (3.684392) | 2.695210 / 55.444624 (-52.749414) | 2.294317 / 6.876477 (-4.582160) | 2.456585 / 2.142072 (0.314513) | 1.019907 / 4.805227 (-3.785320) | 0.201225 / 6.500664 (-6.299439) | 0.077113 / 0.075469 (0.001644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.497662 / 1.841788 (-0.344126) | 18.216941 / 8.074308 (10.142633) | 17.016638 / 10.191392 (6.825246) | 0.193271 / 0.680424 (-0.487153) | 0.020440 / 0.534201 (-0.513761) | 0.509361 / 0.579283 (-0.069922) | 0.513389 / 0.434364 (0.079025) | 0.622266 / 0.540337 (0.081928) | 0.741733 / 1.386936 (-0.645203) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008641 / 0.011353 (-0.002712) | 0.005792 / 0.011008 (-0.005216) | 0.086020 / 0.038508 (0.047512) | 0.040005 / 0.023109 (0.016896) | 0.435120 / 0.275898 (0.159222) | 0.480269 / 0.323480 (0.156789) | 0.006669 / 0.007986 (-0.001317) | 0.006039 / 0.004328 (0.001711) | 0.083468 / 0.004250 (0.079218) | 0.057700 / 0.037052 (0.020648) | 0.416418 / 0.258489 (0.157929) | 0.508286 / 0.293841 (0.214445) | 0.041198 / 0.128546 (-0.087349) | 0.014346 / 0.075646 (-0.061301) | 0.100553 / 0.419271 (-0.318718) | 0.054201 / 0.043533 (0.010668) | 0.438232 / 0.255139 (0.183093) | 0.454707 / 0.283200 (0.171508) | 0.118332 / 0.141683 (-0.023351) | 1.657607 / 1.452155 (0.205452) | 1.825510 / 1.492716 (0.332794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236156 / 0.018006 (0.218150) | 0.487612 / 0.000490 (0.487123) | 0.005747 / 0.000200 (0.005547) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035127 / 0.037411 (-0.002284) | 0.132013 / 0.014526 (0.117487) | 0.142316 / 0.176557 (-0.034241) | 0.198627 / 0.737135 (-0.538508) | 0.145454 / 0.296338 (-0.150885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513041 / 0.215209 (0.297832) | 5.066197 / 2.077655 (2.988542) | 2.508779 / 1.504120 (1.004659) | 2.273901 / 1.541195 (0.732706) | 2.364958 / 1.468490 (0.896468) | 0.811367 / 4.584777 (-3.773410) | 4.504744 / 3.745712 (0.759032) | 2.499811 / 5.269862 (-2.770050) | 1.583349 / 4.565676 (-2.982328) | 0.101701 / 0.424275 (-0.322574) | 0.014379 / 0.007607 (0.006772) | 0.669506 / 0.226044 (0.443462) | 6.556702 / 2.268929 (4.287774) | 3.123457 / 55.444624 (-52.321167) | 2.731997 / 6.876477 (-4.144480) | 2.862866 / 2.142072 (0.720794) | 0.992956 / 4.805227 (-3.812271) | 0.200473 / 6.500664 (-6.300191) | 0.078780 / 0.075469 (0.003311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540718 / 1.841788 (-0.301070) | 18.749344 / 8.074308 (10.675036) | 15.648983 / 10.191392 (5.457591) | 0.174089 / 0.680424 (-0.506335) | 0.020441 / 0.534201 (-0.513760) | 0.503742 / 0.579283 (-0.075541) | 0.500648 / 0.434364 (0.066284) | 0.598558 / 0.540337 (0.058221) | 0.712093 / 1.386936 (-0.674843) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009940 / 0.011353 (-0.001412) | 0.006193 / 0.011008 (-0.004815) | 0.125874 / 0.038508 (0.087366) | 0.038664 / 0.023109 (0.015555) | 0.380013 / 0.275898 (0.104115) | 0.430152 / 0.323480 (0.106672) | 0.006961 / 0.007986 (-0.001025) | 0.004749 / 0.004328 (0.000420) | 0.099743 / 0.004250 (0.095492) | 0.052349 / 0.037052 (0.015297) | 0.433354 / 0.258489 (0.174865) | 0.436273 / 0.293841 (0.142433) | 0.053929 / 0.128546 (-0.074617) | 0.019369 / 0.075646 (-0.056278) | 0.421783 / 0.419271 (0.002511) | 0.062746 / 0.043533 (0.019213) | 0.377225 / 0.255139 (0.122086) | 0.413708 / 0.283200 (0.130508) | 0.111371 / 0.141683 (-0.030312) | 1.819166 / 1.452155 (0.367011) | 1.974527 / 1.492716 (0.481810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090664 / 0.018006 (0.072658) | 0.566166 / 0.000490 (0.565676) | 0.079305 / 0.000200 (0.079105) | 0.000755 / 0.000054 (0.000700) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029720 / 0.037411 (-0.007691) | 0.126030 / 0.014526 (0.111504) | 0.146020 / 0.176557 (-0.030537) | 0.210354 / 0.737135 (-0.526781) | 0.149428 / 0.296338 (-0.146910) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.624371 / 0.215209 (0.409162) | 6.332839 / 2.077655 (4.255184) | 2.547784 / 1.504120 (1.043664) | 2.150508 / 1.541195 (0.609313) | 2.240816 / 1.468490 
(0.772326) | 1.271131 / 4.584777 (-3.313646) | 5.642726 / 3.745712 (1.897014) | 3.212988 / 5.269862 (-2.056874) | 2.258123 / 4.565676 (-2.307553) | 0.149477 / 0.424275 (-0.274798) | 0.014603 / 0.007607 (0.006996) | 0.782155 / 0.226044 (0.556111) | 7.855191 / 2.268929 (5.586262) | 3.308638 / 55.444624 (-52.135986) | 2.548142 / 6.876477 (-4.328335) | 2.627374 / 2.142072 (0.485301) | 1.515170 / 4.805227 (-3.290058) | 0.262479 / 6.500664 (-6.238185) | 0.082181 / 0.075469 (0.006712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573169 / 1.841788 (-0.268618) | 18.105719 / 8.074308 (10.031411) | 22.015179 / 10.191392 (11.823787) | 0.254678 / 0.680424 (-0.425746) | 0.027098 / 0.534201 (-0.507103) | 0.578045 / 0.579283 (-0.001238) | 0.647130 / 0.434364 (0.212766) | 0.650522 / 0.540337 (0.110185) | 0.797713 / 1.386936 (-0.589223) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010376 / 0.011353 (-0.000977) | 0.005990 / 0.011008 (-0.005018) | 0.097144 / 0.038508 (0.058635) | 0.038205 / 0.023109 (0.015096) | 0.468347 / 0.275898 (0.192449) | 0.497646 / 0.323480 (0.174166) | 0.006916 / 0.007986 (-0.001069) | 0.004760 / 0.004328 (0.000431) | 0.109838 / 0.004250 (0.105587) | 0.048321 / 0.037052 (0.011269) | 0.437458 / 0.258489 (0.178969) | 0.534864 / 0.293841 (0.241023) | 0.053655 / 0.128546 (-0.074892) | 0.021915 / 0.075646 (-0.053732) | 0.121047 / 0.419271 (-0.298224) | 0.059694 / 0.043533 (0.016162) | 0.466937 / 0.255139 (0.211798) | 0.482030 / 0.283200 (0.198831) | 0.117458 / 0.141683 (-0.024225) | 1.835551 / 1.452155 (0.383396) | 1.965748 / 1.492716 (0.473031) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234885 / 0.018006 (0.216879) | 0.529925 / 0.000490 (0.529436) | 0.000484 / 0.000200 (0.000284) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030959 / 0.037411 (-0.006453) | 0.128905 / 0.014526 (0.114379) | 0.136913 / 0.176557 (-0.039643) | 0.195133 / 0.737135 (-0.542002) | 0.147929 / 0.296338 (-0.148410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.715661 / 0.215209 (0.500451) | 6.994125 / 2.077655 (4.916470) | 3.033178 / 1.504120 (1.529058) | 2.663709 / 1.541195 (1.122515) | 2.707558 / 1.468490 (1.239068) | 1.316195 / 4.584777 (-3.268582) | 5.688264 / 3.745712 (1.942552) | 3.260897 / 5.269862 (-2.008964) | 2.134985 / 4.565676 (-2.430691) | 0.153945 / 0.424275 (-0.270330) | 0.014727 / 0.007607 (0.007119) | 0.911339 / 0.226044 (0.685294) | 8.902640 / 2.268929 (6.633711) | 3.806606 / 55.444624 (-51.638018) | 3.052238 / 6.876477 (-3.824238) | 3.046945 / 2.142072 (0.904873) | 1.559837 / 4.805227 (-3.245390) | 0.272276 / 6.500664 (-6.228388) | 0.087728 / 0.075469 (0.012259) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712691 / 1.841788 (-0.129097) | 18.127575 / 8.074308 (10.053267) | 19.734063 / 10.191392 (9.542671) | 0.235006 / 0.680424 (-0.445418) | 0.027581 / 0.534201 (-0.506620) | 0.551080 / 0.579283 (-0.028203) | 0.608564 / 0.434364 (0.174200) | 0.636578 / 0.540337 (0.096241) | 0.732374 / 1.386936 (-0.654562) |\n\n</details>\n</details>\n\n\n",
"Looks good in testing - this should be ready for review! cc @lhoestq @massquantity",
"Looks good to me, though i doubt that very few people will upgrade to TF >= 2.9 unless their memory is full:)",
"Is it more efficient than using numpy to shuffle as in multiprocessing ? Why not use the same strategy ?",
"Good question, honestly! The NumPy strategy works fine, but requires us to handle multiple processes instead of doing everything in `tf.data`. We could just scrap this entire code path and always use the multiprocessing NumPy approach, but I think single-threaded throughput would be lower if we did that. If you prefer it for code simplicity, though, I can do that.\r\n\r\nIn the longer term, I'm hoping that `tf.data` gets native support for our data structures and we can transition the whole pipeline to pure `tf.data`, but that still hasn't happened 🫠",
"And @massquantity TF 2.13 is going to release in a couple of days, so I hope most users are at least on TF 2.9 by now!",
"Unless there is a big gap in performance I think code simplicity would be appreciated ^^",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008638 / 0.011353 (-0.002715) | 0.006013 / 0.011008 (-0.004995) | 0.116456 / 0.038508 (0.077948) | 0.040419 / 0.023109 (0.017310) | 0.418374 / 0.275898 (0.142476) | 0.447693 / 0.323480 (0.124213) | 0.007002 / 0.007986 (-0.000984) | 0.006175 / 0.004328 (0.001847) | 0.087801 / 0.004250 (0.083550) | 0.051980 / 0.037052 (0.014928) | 0.393275 / 0.258489 (0.134786) | 0.449601 / 0.293841 (0.155760) | 0.041670 / 0.128546 (-0.086876) | 0.014396 / 0.075646 (-0.061251) | 0.399175 / 0.419271 (-0.020096) | 0.060635 / 0.043533 (0.017102) | 0.391449 / 0.255139 (0.136310) | 0.420713 / 0.283200 (0.137513) | 0.121369 / 0.141683 (-0.020314) | 1.692630 / 1.452155 (0.240475) | 1.815526 / 1.492716 (0.322810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244321 / 0.018006 (0.226315) | 0.487947 / 0.000490 (0.487458) | 0.004563 / 0.000200 (0.004363) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033425 / 0.037411 (-0.003987) | 0.134458 / 0.014526 (0.119932) | 0.138810 / 0.176557 (-0.037746) | 0.208871 / 0.737135 (-0.528264) | 0.147964 / 0.296338 (-0.148374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483347 / 0.215209 (0.268138) | 4.799550 / 2.077655 (2.721895) | 2.174149 / 1.504120 (0.670029) | 1.943276 / 1.541195 (0.402081) | 2.010884 / 1.468490 
(0.542394) | 0.832030 / 4.584777 (-3.752747) | 4.716713 / 3.745712 (0.971001) | 4.615810 / 5.269862 (-0.654052) | 2.379600 / 4.565676 (-2.186077) | 0.103560 / 0.424275 (-0.320715) | 0.014683 / 0.007607 (0.007076) | 0.598558 / 0.226044 (0.372514) | 5.999126 / 2.268929 (3.730197) | 2.677819 / 55.444624 (-52.766805) | 2.320838 / 6.876477 (-4.555639) | 2.503684 / 2.142072 (0.361611) | 1.016459 / 4.805227 (-3.788769) | 0.201672 / 6.500664 (-6.298992) | 0.079310 / 0.075469 (0.003841) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.446374 / 1.841788 (-0.395413) | 19.219310 / 8.074308 (11.145002) | 17.294665 / 10.191392 (7.103273) | 0.246115 / 0.680424 (-0.434309) | 0.021406 / 0.534201 (-0.512795) | 0.524084 / 0.579283 (-0.055200) | 0.511254 / 0.434364 (0.076890) | 0.621304 / 0.540337 (0.080966) | 0.727088 / 1.386936 (-0.659848) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008907 / 0.011353 (-0.002446) | 0.006165 / 0.011008 (-0.004843) | 0.090786 / 0.038508 (0.052278) | 0.040893 / 0.023109 (0.017784) | 0.451252 / 0.275898 (0.175354) | 0.477811 / 0.323480 (0.154331) | 0.007418 / 0.007986 (-0.000568) | 0.005789 / 0.004328 (0.001461) | 0.087422 / 0.004250 (0.083171) | 0.061800 / 0.037052 (0.024748) | 0.459085 / 0.258489 (0.200596) | 0.488897 / 0.293841 (0.195056) | 0.048157 / 0.128546 (-0.080389) | 0.014676 / 0.075646 (-0.060970) | 0.104372 / 0.419271 (-0.314900) | 0.058066 / 0.043533 (0.014534) | 0.446131 / 0.255139 (0.190992) | 0.460428 / 0.283200 (0.177228) | 0.128492 / 0.141683 (-0.013191) | 1.811419 / 1.452155 (0.359265) | 1.894781 / 1.492716 (0.402064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220527 / 0.018006 (0.202520) | 0.487663 / 0.000490 (0.487173) | 0.003864 / 0.000200 (0.003664) | 0.000162 / 0.000054 (0.000107) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036354 / 0.037411 (-0.001057) | 0.140469 / 0.014526 (0.125944) | 0.149990 / 0.176557 (-0.026566) | 0.212369 / 0.737135 (-0.524766) | 0.154000 / 0.296338 (-0.142338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514172 / 0.215209 (0.298963) | 5.129247 / 2.077655 (3.051593) | 2.536773 / 1.504120 (1.032653) | 2.317253 / 1.541195 (0.776058) | 2.424066 / 1.468490 (0.955576) | 0.836160 / 4.584777 (-3.748617) | 4.906235 / 3.745712 (1.160523) | 4.431395 / 5.269862 (-0.838467) | 2.332845 / 4.565676 (-2.232831) | 0.102867 / 0.424275 (-0.321409) | 0.014851 / 0.007607 (0.007244) | 0.644104 / 0.226044 (0.418060) | 6.415847 / 2.268929 (4.146918) | 3.186984 / 55.444624 (-52.257641) | 2.774125 / 6.876477 (-4.102352) | 2.848045 / 2.142072 (0.705972) | 1.018757 / 4.805227 (-3.786470) | 0.212333 / 6.500664 (-6.288331) | 0.079405 / 0.075469 (0.003936) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748375 / 1.841788 (-0.093412) | 19.733829 / 8.074308 (11.659521) | 15.766665 / 10.191392 (5.575273) | 0.192087 / 0.680424 (-0.488337) | 0.027641 / 0.534201 (-0.506560) | 0.504101 / 0.579283 (-0.075182) | 0.493815 / 0.434364 (0.059451) | 0.583247 / 0.540337 (0.042910) | 0.697432 / 1.386936 (-0.689504) |\n\n</details>\n</details>\n\n\n",
"Hi @lhoestq, I tried moving everything to the NumPy path but ran into issues - the `SharedMemory` constructs it depends on were only added in Python 3.8. As a result, if we move everything to that path then `to_tf_dataset` does not work on older Python versions.\r\n\r\nFor now, how do you feel about reverting and using my original solution, which has fallbacks for all versions of Python and TensorFlow? Once our minimum versions pass Python 3.8 or TF 2.9 we can remove the older code paths.",
"Gentle ping on this question @lhoestq!",
"Ah yes indeed. Feel free to revert and add comments to explain why you needed to have a different approach for single process",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008395 / 0.011353 (-0.002958) | 0.005773 / 0.011008 (-0.005235) | 0.115702 / 0.038508 (0.077194) | 0.039897 / 0.023109 (0.016788) | 0.483140 / 0.275898 (0.207242) | 0.531288 / 0.323480 (0.207808) | 0.006739 / 0.007986 (-0.001246) | 0.004419 / 0.004328 (0.000090) | 0.086374 / 0.004250 (0.082124) | 0.056498 / 0.037052 (0.019446) | 0.491589 / 0.258489 (0.233100) | 0.556366 / 0.293841 (0.262525) | 0.041366 / 0.128546 (-0.087181) | 0.014373 / 0.075646 (-0.061274) | 0.395504 / 0.419271 (-0.023767) | 0.094382 / 0.043533 (0.050849) | 0.483000 / 0.255139 (0.227861) | 0.522693 / 0.283200 (0.239494) | 0.138804 / 0.141683 (-0.002879) | 1.719563 / 1.452155 (0.267409) | 1.853470 / 1.492716 (0.360753) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235616 / 0.018006 (0.217610) | 0.483267 / 0.000490 (0.482777) | 0.008663 / 0.000200 (0.008463) | 0.000401 / 0.000054 (0.000347) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033124 / 0.037411 (-0.004287) | 0.128821 / 0.014526 (0.114295) | 0.138910 / 0.176557 (-0.037647) | 0.213570 / 0.737135 (-0.523566) | 0.146646 / 0.296338 (-0.149693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479998 / 0.215209 (0.264789) | 4.772325 / 2.077655 (2.694670) | 2.228424 / 1.504120 (0.724304) | 2.000915 / 1.541195 (0.459721) | 2.105799 / 1.468490 
(0.637309) | 0.824235 / 4.584777 (-3.760542) | 4.511902 / 3.745712 (0.766189) | 4.723073 / 5.269862 (-0.546789) | 2.333442 / 4.565676 (-2.232235) | 0.101161 / 0.424275 (-0.323114) | 0.014403 / 0.007607 (0.006796) | 0.596395 / 0.226044 (0.370351) | 5.961046 / 2.268929 (3.692117) | 2.746679 / 55.444624 (-52.697946) | 2.352085 / 6.876477 (-4.524392) | 2.609812 / 2.142072 (0.467740) | 0.996950 / 4.805227 (-3.808277) | 0.197923 / 6.500664 (-6.302741) | 0.075546 / 0.075469 (0.000077) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.529896 / 1.841788 (-0.311892) | 18.183887 / 8.074308 (10.109578) | 16.352332 / 10.191392 (6.160940) | 0.213504 / 0.680424 (-0.466920) | 0.020388 / 0.534201 (-0.513813) | 0.497832 / 0.579283 (-0.081451) | 0.495477 / 0.434364 (0.061113) | 0.585984 / 0.540337 (0.045647) | 0.688726 / 1.386936 (-0.698210) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008422 / 0.011353 (-0.002931) | 0.005876 / 0.011008 (-0.005132) | 0.089310 / 0.038508 (0.050802) | 0.039769 / 0.023109 (0.016660) | 0.425279 / 0.275898 (0.149381) | 0.470818 / 0.323480 (0.147338) | 0.006519 / 0.007986 (-0.001467) | 0.006276 / 0.004328 (0.001948) | 0.085753 / 0.004250 (0.081503) | 0.053867 / 0.037052 (0.016815) | 0.429193 / 0.258489 (0.170704) | 0.480278 / 0.293841 (0.186437) | 0.040657 / 0.128546 (-0.087889) | 0.014055 / 0.075646 (-0.061591) | 0.101422 / 0.419271 (-0.317849) | 0.053803 / 0.043533 (0.010271) | 0.428348 / 0.255139 (0.173209) | 0.452193 / 0.283200 (0.168994) | 0.124914 / 0.141683 (-0.016769) | 1.750122 / 1.452155 (0.297968) | 1.850875 / 1.492716 (0.358159) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249958 / 0.018006 (0.231952) | 0.485183 / 0.000490 (0.484694) | 0.000472 / 0.000200 (0.000272) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034563 / 0.037411 (-0.002848) | 0.135565 / 0.014526 (0.121039) | 0.143271 / 0.176557 (-0.033285) | 0.199080 / 0.737135 (-0.538056) | 0.149336 / 0.296338 (-0.147003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.526170 / 0.215209 (0.310961) | 5.270960 / 2.077655 (3.193305) | 2.664585 / 1.504120 (1.160465) | 2.440027 / 1.541195 (0.898832) | 2.612764 / 1.468490 (1.144274) | 0.828965 / 4.584777 (-3.755812) | 4.769983 / 3.745712 (1.024271) | 2.441962 / 5.269862 (-2.827900) | 1.549032 / 4.565676 (-3.016644) | 0.100851 / 0.424275 (-0.323424) | 0.014425 / 0.007607 (0.006818) | 0.640908 / 0.226044 (0.414864) | 6.399041 / 2.268929 (4.130113) | 3.242424 / 55.444624 (-52.202200) | 2.836317 / 6.876477 (-4.040160) | 2.933010 / 2.142072 (0.790938) | 1.002277 / 4.805227 (-3.802950) | 0.201247 / 6.500664 (-6.299417) | 0.078777 / 0.075469 (0.003308) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620415 / 1.841788 (-0.221373) | 19.153631 / 8.074308 (11.079323) | 16.744068 / 10.191392 (6.552676) | 0.167327 / 0.680424 (-0.513097) | 0.020186 / 0.534201 (-0.514015) | 0.503683 / 0.579283 (-0.075600) | 0.500051 / 0.434364 (0.065687) | 0.587188 / 0.540337 (0.046850) | 0.699975 / 1.386936 (-0.686961) |\n\n</details>\n</details>\n\n\n",
"This is probably ready, but likely conflicts with #5883. I'll wait for that PR to be merged and then rebase and merge this one.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008387 / 0.011353 (-0.002965) | 0.005824 / 0.011008 (-0.005184) | 0.117721 / 0.038508 (0.079213) | 0.040420 / 0.023109 (0.017311) | 0.404961 / 0.275898 (0.129063) | 0.426695 / 0.323480 (0.103215) | 0.006634 / 0.007986 (-0.001352) | 0.006033 / 0.004328 (0.001705) | 0.088652 / 0.004250 (0.084402) | 0.048075 / 0.037052 (0.011022) | 0.400683 / 0.258489 (0.142194) | 0.432489 / 0.293841 (0.138648) | 0.042065 / 0.128546 (-0.086482) | 0.014071 / 0.075646 (-0.061575) | 0.399398 / 0.419271 (-0.019873) | 0.066034 / 0.043533 (0.022501) | 0.400056 / 0.255139 (0.144918) | 0.421130 / 0.283200 (0.137930) | 0.119721 / 0.141683 (-0.021962) | 1.752166 / 1.452155 (0.300011) | 1.820161 / 1.492716 (0.327444) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244264 / 0.018006 (0.226258) | 0.480882 / 0.000490 (0.480392) | 0.005604 / 0.000200 (0.005404) | 0.000175 / 0.000054 (0.000121) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032397 / 0.037411 (-0.005015) | 0.131632 / 0.014526 (0.117106) | 0.139765 / 0.176557 (-0.036792) | 0.213135 / 0.737135 (-0.524000) | 0.147891 / 0.296338 (-0.148447) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474534 / 0.215209 (0.259325) | 4.730424 / 2.077655 (2.652770) | 2.163706 / 1.504120 (0.659586) | 1.936051 / 1.541195 (0.394857) | 2.012185 / 1.468490 
(0.543695) | 0.826583 / 4.584777 (-3.758194) | 4.921494 / 3.745712 (1.175782) | 2.431401 / 5.269862 (-2.838460) | 1.566020 / 4.565676 (-2.999656) | 0.101255 / 0.424275 (-0.323020) | 0.014553 / 0.007607 (0.006946) | 0.608301 / 0.226044 (0.382256) | 6.089801 / 2.268929 (3.820873) | 2.691986 / 55.444624 (-52.752638) | 2.296498 / 6.876477 (-4.579979) | 2.455388 / 2.142072 (0.313315) | 0.984342 / 4.805227 (-3.820885) | 0.200447 / 6.500664 (-6.300217) | 0.077602 / 0.075469 (0.002133) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445067 / 1.841788 (-0.396721) | 18.588670 / 8.074308 (10.514362) | 16.950216 / 10.191392 (6.758824) | 0.169688 / 0.680424 (-0.510736) | 0.020544 / 0.534201 (-0.513657) | 0.508506 / 0.579283 (-0.070777) | 0.516218 / 0.434364 (0.081854) | 0.646072 / 0.540337 (0.105734) | 0.763227 / 1.386936 (-0.623709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008816 / 0.011353 (-0.002537) | 0.006016 / 0.011008 (-0.004992) | 0.090946 / 0.038508 (0.052438) | 0.040189 / 0.023109 (0.017080) | 0.446723 / 0.275898 (0.170825) | 0.494633 / 0.323480 (0.171153) | 0.007206 / 0.007986 (-0.000779) | 0.004508 / 0.004328 (0.000180) | 0.088477 / 0.004250 (0.084226) | 0.055587 / 0.037052 (0.018535) | 0.445349 / 0.258489 (0.186860) | 0.504940 / 0.293841 (0.211099) | 0.041976 / 0.128546 (-0.086570) | 0.014296 / 0.075646 (-0.061351) | 0.102835 / 0.419271 (-0.316436) | 0.054786 / 0.043533 (0.011253) | 0.444789 / 0.255139 (0.189651) | 0.472306 / 0.283200 (0.189106) | 0.123365 / 0.141683 (-0.018318) | 1.725803 / 1.452155 (0.273648) | 1.832216 / 1.492716 (0.339500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252680 / 0.018006 (0.234674) | 0.476719 / 0.000490 (0.476229) | 0.000461 / 0.000200 (0.000261) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035961 / 0.037411 (-0.001450) | 0.135399 / 0.014526 (0.120873) | 0.147549 / 0.176557 (-0.029007) | 0.207468 / 0.737135 (-0.529667) | 0.151591 / 0.296338 (-0.144747) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528143 / 0.215209 (0.312934) | 5.270766 / 2.077655 (3.193111) | 2.675644 / 1.504120 (1.171524) | 2.472855 / 1.541195 (0.931660) | 2.636020 / 1.468490 (1.167530) | 0.841325 / 4.584777 (-3.743452) | 4.702290 / 3.745712 (0.956578) | 2.523537 / 5.269862 (-2.746325) | 1.595617 / 4.565676 (-2.970059) | 0.102095 / 0.424275 (-0.322180) | 0.014568 / 0.007607 (0.006961) | 0.652090 / 0.226044 (0.426046) | 6.503086 / 2.268929 (4.234158) | 3.277025 / 55.444624 (-52.167599) | 2.931264 / 6.876477 (-3.945213) | 3.021667 / 2.142072 (0.879594) | 1.002560 / 4.805227 (-3.802668) | 0.202621 / 6.500664 (-6.298043) | 0.080583 / 0.075469 (0.005114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.639281 / 1.841788 (-0.202507) | 18.911529 / 8.074308 (10.837220) | 17.082795 / 10.191392 (6.891403) | 0.179456 / 0.680424 (-0.500968) | 0.021740 / 0.534201 (-0.512460) | 0.526426 / 0.579283 (-0.052857) | 0.535083 / 0.434364 (0.100719) | 0.583304 / 0.540337 (0.042967) | 0.696733 / 1.386936 (-0.690203) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006823 / 0.011353 (-0.004530) | 0.004847 / 0.011008 (-0.006161) | 0.096038 / 0.038508 (0.057530) | 0.033037 / 0.023109 (0.009928) | 0.298379 / 0.275898 (0.022481) | 0.333319 / 0.323480 (0.009839) | 0.005343 / 0.007986 (-0.002643) | 0.003863 / 0.004328 (-0.000465) | 0.072928 / 0.004250 (0.068678) | 0.040898 / 0.037052 (0.003846) | 0.303116 / 0.258489 (0.044627) | 0.334021 / 0.293841 (0.040181) | 0.034780 / 0.128546 (-0.093767) | 0.011978 / 0.075646 (-0.063668) | 0.331642 / 0.419271 (-0.087629) | 0.052729 / 0.043533 (0.009196) | 0.298586 / 0.255139 (0.043447) | 0.319296 / 0.283200 (0.036097) | 0.097711 / 0.141683 (-0.043972) | 1.416899 / 1.452155 (-0.035256) | 1.546008 / 1.492716 (0.053292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234303 / 0.018006 (0.216296) | 0.492767 / 0.000490 (0.492278) | 0.004935 / 0.000200 (0.004736) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030617 / 0.037411 (-0.006795) | 0.121203 / 0.014526 (0.106677) | 0.126677 / 0.176557 (-0.049879) | 0.186379 / 0.737135 (-0.550756) | 0.129849 / 0.296338 (-0.166490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416324 / 0.215209 (0.201115) | 4.135563 / 2.077655 (2.057908) | 1.976182 / 1.504120 (0.472062) | 1.807611 / 1.541195 (0.266416) | 1.886282 / 1.468490 
(0.417792) | 0.713006 / 4.584777 (-3.871771) | 3.899205 / 3.745712 (0.153493) | 2.283427 / 5.269862 (-2.986435) | 1.543088 / 4.565676 (-3.022589) | 0.086189 / 0.424275 (-0.338087) | 0.012908 / 0.007607 (0.005301) | 0.516156 / 0.226044 (0.290112) | 5.144199 / 2.268929 (2.875271) | 2.460142 / 55.444624 (-52.984482) | 2.209054 / 6.876477 (-4.667423) | 2.325277 / 2.142072 (0.183204) | 0.849890 / 4.805227 (-3.955337) | 0.173687 / 6.500664 (-6.326977) | 0.070178 / 0.075469 (-0.005291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241790 / 1.841788 (-0.599997) | 16.047257 / 8.074308 (7.972949) | 15.774146 / 10.191392 (5.582754) | 0.145871 / 0.680424 (-0.534553) | 0.018106 / 0.534201 (-0.516095) | 0.433642 / 0.579283 (-0.145641) | 0.425311 / 0.434364 (-0.009053) | 0.533963 / 0.540337 (-0.006375) | 0.638786 / 1.386936 (-0.748151) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007242 / 0.011353 (-0.004111) | 0.005599 / 0.011008 (-0.005410) | 0.073443 / 0.038508 (0.034935) | 0.033764 / 0.023109 (0.010655) | 0.365990 / 0.275898 (0.090092) | 0.392943 / 0.323480 (0.069463) | 0.005987 / 0.007986 (-0.001999) | 0.004312 / 0.004328 (-0.000016) | 0.072831 / 0.004250 (0.068580) | 0.048854 / 0.037052 (0.011802) | 0.362477 / 0.258489 (0.103988) | 0.399993 / 0.293841 (0.106152) | 0.035602 / 0.128546 (-0.092944) | 0.012445 / 0.075646 (-0.063202) | 0.085768 / 0.419271 (-0.333504) | 0.048544 / 0.043533 (0.005011) | 0.362246 / 0.255139 (0.107107) | 0.388753 / 0.283200 (0.105554) | 0.109829 / 0.141683 (-0.031854) | 1.546881 / 1.452155 (0.094726) | 1.619454 / 1.492716 (0.126737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189926 / 0.018006 (0.171920) | 0.447936 / 0.000490 (0.447446) | 0.002354 / 0.000200 (0.002155) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031740 / 0.037411 (-0.005671) | 0.122595 / 0.014526 (0.108069) | 0.128389 / 0.176557 (-0.048168) | 0.180570 / 0.737135 (-0.556566) | 0.132939 / 0.296338 (-0.163399) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425073 / 0.215209 (0.209863) | 4.238964 / 2.077655 (2.161309) | 2.095116 / 1.504120 (0.590996) | 1.913925 / 1.541195 (0.372730) | 2.024669 / 1.468490 (0.556179) | 0.699172 / 4.584777 (-3.885605) | 3.845807 / 3.745712 (0.100094) | 2.167502 / 5.269862 (-3.102360) | 1.375267 / 4.565676 (-3.190410) | 0.086739 / 0.424275 (-0.337536) | 0.012198 / 0.007607 (0.004591) | 0.525975 / 0.226044 (0.299931) | 5.249449 / 2.268929 (2.980521) | 2.550565 / 55.444624 (-52.894060) | 2.257557 / 6.876477 (-4.618920) | 2.298936 / 2.142072 (0.156863) | 0.850295 / 4.805227 (-3.954932) | 0.170506 / 6.500664 (-6.330158) | 0.065659 / 0.075469 (-0.009810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330556 / 1.841788 (-0.511231) | 16.920203 / 8.074308 (8.845894) | 15.966739 / 10.191392 (5.775347) | 0.164000 / 0.680424 (-0.516424) | 0.018211 / 0.534201 (-0.515990) | 0.436253 / 0.579283 (-0.143030) | 0.449666 / 0.434364 (0.015302) | 0.522287 / 0.540337 (-0.018050) | 0.615944 / 1.386936 (-0.770992) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007273 / 0.011353 (-0.004080) | 0.005198 / 0.011008 (-0.005810) | 0.114362 / 0.038508 (0.075854) | 0.031113 / 0.023109 (0.008003) | 0.378568 / 0.275898 (0.102670) | 0.441695 / 0.323480 (0.118215) | 0.006037 / 0.007986 (-0.001949) | 0.005102 / 0.004328 (0.000774) | 0.098682 / 0.004250 (0.094432) | 0.042797 / 0.037052 (0.005745) | 0.360028 / 0.258489 (0.101539) | 0.435757 / 0.293841 (0.141916) | 0.041438 / 0.128546 (-0.087109) | 0.013728 / 0.075646 (-0.061918) | 0.376154 / 0.419271 (-0.043117) | 0.075324 / 0.043533 (0.031791) | 0.357221 / 0.255139 (0.102082) | 0.416378 / 0.283200 (0.133178) | 0.110707 / 0.141683 (-0.030975) | 1.603215 / 1.452155 (0.151061) | 1.736843 / 1.492716 (0.244127) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249479 / 0.018006 (0.231473) | 0.513205 / 0.000490 (0.512715) | 0.003856 / 0.000200 (0.003656) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027750 / 0.037411 (-0.009661) | 0.105437 / 0.014526 (0.090911) | 0.115903 / 0.176557 (-0.060653) | 0.179662 / 0.737135 (-0.557474) | 0.116305 / 0.296338 (-0.180033) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.551681 / 0.215209 (0.336472) | 5.544590 / 2.077655 (3.466935) | 2.193933 / 1.504120 (0.689813) | 1.898395 / 1.541195 (0.357201) | 1.877288 / 1.468490 
(0.408798) | 0.858097 / 4.584777 (-3.726680) | 4.920982 / 3.745712 (1.175270) | 2.478220 / 5.269862 (-2.791641) | 1.779608 / 4.565676 (-2.786069) | 0.101321 / 0.424275 (-0.322954) | 0.012627 / 0.007607 (0.005020) | 0.674865 / 0.226044 (0.448820) | 6.808224 / 2.268929 (4.539295) | 2.822466 / 55.444624 (-52.622159) | 2.170379 / 6.876477 (-4.706098) | 2.224278 / 2.142072 (0.082205) | 1.032763 / 4.805227 (-3.772464) | 0.198851 / 6.500664 (-6.301813) | 0.069249 / 0.075469 (-0.006220) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.425987 / 1.841788 (-0.415801) | 16.212942 / 8.074308 (8.138634) | 18.945770 / 10.191392 (8.754378) | 0.192901 / 0.680424 (-0.487522) | 0.025343 / 0.534201 (-0.508858) | 0.465441 / 0.579283 (-0.113842) | 0.540966 / 0.434364 (0.106602) | 0.576736 / 0.540337 (0.036399) | 0.675717 / 1.386936 (-0.711219) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007426 / 0.011353 (-0.003927) | 0.005023 / 0.011008 (-0.005985) | 0.085083 / 0.038508 (0.046575) | 0.030559 / 0.023109 (0.007449) | 0.398461 / 0.275898 (0.122563) | 0.418998 / 0.323480 (0.095518) | 0.006697 / 0.007986 (-0.001288) | 0.004665 / 0.004328 (0.000337) | 0.087724 / 0.004250 (0.083473) | 0.045799 / 0.037052 (0.008747) | 0.395165 / 0.258489 (0.136676) | 0.430172 / 0.293841 (0.136331) | 0.040486 / 0.128546 (-0.088060) | 0.014237 / 0.075646 (-0.061409) | 0.099429 / 0.419271 (-0.319843) | 0.056006 / 0.043533 (0.012473) | 0.389046 / 0.255139 (0.133907) | 0.419559 / 0.283200 (0.136359) | 0.108550 / 0.141683 (-0.033132) | 1.614052 / 1.452155 (0.161897) | 1.677785 / 1.492716 (0.185069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202178 / 0.018006 (0.184172) | 0.486365 / 0.000490 (0.485875) | 0.003844 / 0.000200 (0.003644) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027963 / 0.037411 (-0.009449) | 0.110399 / 0.014526 (0.095873) | 0.122266 / 0.176557 (-0.054291) | 0.178551 / 0.737135 (-0.558585) | 0.129259 / 0.296338 (-0.167080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604178 / 0.215209 (0.388969) | 6.135943 / 2.077655 (4.058288) | 2.547576 / 1.504120 (1.043456) | 2.262470 / 1.541195 (0.721276) | 2.275402 / 1.468490 (0.806912) | 0.878804 / 4.584777 (-3.705972) | 5.152200 / 3.745712 (1.406488) | 2.553715 / 5.269862 (-2.716147) | 1.580959 / 4.565676 (-2.984717) | 0.107895 / 0.424275 (-0.316380) | 0.012751 / 0.007607 (0.005143) | 0.770678 / 0.226044 (0.544633) | 7.744303 / 2.268929 (5.475374) | 3.342037 / 55.444624 (-52.102588) | 2.756848 / 6.876477 (-4.119629) | 2.739357 / 2.142072 (0.597285) | 1.086330 / 4.805227 (-3.718897) | 0.230983 / 6.500664 (-6.269681) | 0.073771 / 0.075469 (-0.001698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.493441 / 1.841788 (-0.348347) | 16.621611 / 8.074308 (8.547303) | 19.081000 / 10.191392 (8.889608) | 0.215623 / 0.680424 (-0.464801) | 0.025660 / 0.534201 (-0.508541) | 0.446490 / 0.579283 (-0.132793) | 0.560078 / 0.434364 (0.125714) | 0.527231 / 0.540337 (-0.013106) | 0.636551 / 1.386936 (-0.750385) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008266 / 0.011353 (-0.003087) | 0.005082 / 0.011008 (-0.005927) | 0.119858 / 0.038508 (0.081350) | 0.032907 / 0.023109 (0.009798) | 0.362816 / 0.275898 (0.086918) | 0.403684 / 0.323480 (0.080204) | 0.006296 / 0.007986 (-0.001690) | 0.006220 / 0.004328 (0.001891) | 0.095609 / 0.004250 (0.091359) | 0.048734 / 0.037052 (0.011682) | 0.385724 / 0.258489 (0.127235) | 0.424315 / 0.293841 (0.130475) | 0.042344 / 0.128546 (-0.086202) | 0.016147 / 0.075646 (-0.059500) | 0.409661 / 0.419271 (-0.009610) | 0.057900 / 0.043533 (0.014367) | 0.387013 / 0.255139 (0.131874) | 0.388901 / 0.283200 (0.105702) | 0.103920 / 0.141683 (-0.037762) | 1.732730 / 1.452155 (0.280575) | 1.863912 / 1.492716 (0.371196) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237406 / 0.018006 (0.219400) | 0.514398 / 0.000490 (0.513909) | 0.005941 / 0.000200 (0.005741) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027524 / 0.037411 (-0.009888) | 0.116498 / 0.014526 (0.101972) | 0.129034 / 0.176557 (-0.047522) | 0.218272 / 0.737135 (-0.518864) | 0.148389 / 0.296338 (-0.147950) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604555 / 0.215209 (0.389346) | 5.921576 / 2.077655 (3.843921) | 2.410483 / 1.504120 (0.906363) | 2.220286 / 1.541195 (0.679092) | 2.138880 / 1.468490 
(0.670390) | 0.934962 / 4.584777 (-3.649815) | 5.808855 / 3.745712 (2.063143) | 4.881554 / 5.269862 (-0.388308) | 2.536408 / 4.565676 (-2.029268) | 0.124260 / 0.424275 (-0.300015) | 0.017798 / 0.007607 (0.010190) | 0.778991 / 0.226044 (0.552947) | 7.899262 / 2.268929 (5.630333) | 3.208667 / 55.444624 (-52.235957) | 2.631182 / 6.876477 (-4.245295) | 2.676199 / 2.142072 (0.534127) | 1.165516 / 4.805227 (-3.639711) | 0.228751 / 6.500664 (-6.271913) | 0.081378 / 0.075469 (0.005909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.522156 / 1.841788 (-0.319632) | 17.975381 / 8.074308 (9.901073) | 18.918882 / 10.191392 (8.727490) | 0.223984 / 0.680424 (-0.456440) | 0.025171 / 0.534201 (-0.509030) | 0.467894 / 0.579283 (-0.111389) | 0.559501 / 0.434364 (0.125137) | 0.550392 / 0.540337 (0.010055) | 0.696923 / 1.386936 (-0.690013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008577 / 0.011353 (-0.002775) | 0.006735 / 0.011008 (-0.004273) | 0.095108 / 0.038508 (0.056600) | 0.035059 / 0.023109 (0.011950) | 0.448576 / 0.275898 (0.172677) | 0.492049 / 0.323480 (0.168569) | 0.006600 / 0.007986 (-0.001385) | 0.004760 / 0.004328 (0.000431) | 0.094670 / 0.004250 (0.090419) | 0.052543 / 0.037052 (0.015491) | 0.458927 / 0.258489 (0.200438) | 0.511522 / 0.293841 (0.217681) | 0.046046 / 0.128546 (-0.082500) | 0.015227 / 0.075646 (-0.060419) | 0.114585 / 0.419271 (-0.304686) | 0.057569 / 0.043533 (0.014036) | 0.441989 / 0.255139 (0.186850) | 0.487001 / 0.283200 (0.203801) | 0.115688 / 0.141683 (-0.025995) | 1.777366 / 1.452155 (0.325211) | 1.906216 / 1.492716 (0.413499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224880 / 0.018006 (0.206874) | 0.504153 / 0.000490 (0.503664) | 0.001143 / 0.000200 (0.000943) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033618 / 0.037411 (-0.003793) | 0.127396 / 0.014526 (0.112870) | 0.135648 / 0.176557 (-0.040909) | 0.193140 / 0.737135 (-0.543995) | 0.142129 / 0.296338 (-0.154209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.692845 / 0.215209 (0.477636) | 6.804897 / 2.077655 (4.727242) | 2.851041 / 1.504120 (1.346921) | 2.480698 / 1.541195 (0.939504) | 2.488619 / 1.468490 (1.020129) | 0.970439 / 4.584777 (-3.614338) | 5.466059 / 3.745712 (1.720347) | 2.790261 / 5.269862 (-2.479601) | 1.727638 / 4.565676 (-2.838039) | 0.116345 / 0.424275 (-0.307930) | 0.014348 / 0.007607 (0.006740) | 0.845510 / 0.226044 (0.619465) | 8.397198 / 2.268929 (6.128270) | 3.591998 / 55.444624 (-51.852626) | 2.858339 / 6.876477 (-4.018137) | 2.905075 / 2.142072 (0.763003) | 1.193569 / 4.805227 (-3.611658) | 0.243091 / 6.500664 (-6.257573) | 0.082198 / 0.075469 (0.006729) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.610327 / 1.841788 (-0.231461) | 17.191414 / 8.074308 (9.117106) | 20.176518 / 10.191392 (9.985126) | 0.246574 / 0.680424 (-0.433850) | 0.024343 / 0.534201 (-0.509858) | 0.482091 / 0.579283 (-0.097192) | 0.585241 / 0.434364 (0.150877) | 0.558833 / 0.540337 (0.018496) | 0.654811 / 1.386936 (-0.732125) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006353 / 0.011353 (-0.004999) | 0.004393 / 0.011008 (-0.006616) | 0.098751 / 0.038508 (0.060242) | 0.029090 / 0.023109 (0.005981) | 0.304169 / 0.275898 (0.028271) | 0.339879 / 0.323480 (0.016399) | 0.005577 / 0.007986 (-0.002408) | 0.003516 / 0.004328 (-0.000813) | 0.077347 / 0.004250 (0.073097) | 0.041935 / 0.037052 (0.004882) | 0.305865 / 0.258489 (0.047376) | 0.357063 / 0.293841 (0.063222) | 0.025245 / 0.128546 (-0.103301) | 0.008753 / 0.075646 (-0.066893) | 0.316734 / 0.419271 (-0.102538) | 0.043464 / 0.043533 (-0.000069) | 0.300944 / 0.255139 (0.045805) | 0.330091 / 0.283200 (0.046891) | 0.088593 / 0.141683 (-0.053090) | 1.588958 / 1.452155 (0.136803) | 1.641376 / 1.492716 (0.148660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220290 / 0.018006 (0.202284) | 0.445430 / 0.000490 (0.444940) | 0.004800 / 0.000200 (0.004600) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023828 / 0.037411 (-0.013583) | 0.103446 / 0.014526 (0.088920) | 0.110668 / 0.176557 (-0.065889) | 0.169604 / 0.737135 (-0.567531) | 0.114818 / 0.296338 (-0.181520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416951 / 0.215209 (0.201742) | 4.138917 / 2.077655 (2.061263) | 1.891265 / 1.504120 (0.387145) | 1.687068 / 1.541195 (0.145873) | 1.726618 / 1.468490 
(0.258128) | 0.546977 / 4.584777 (-4.037800) | 3.536153 / 3.745712 (-0.209560) | 1.795206 / 5.269862 (-3.474656) | 1.019845 / 4.565676 (-3.545831) | 0.067040 / 0.424275 (-0.357235) | 0.012038 / 0.007607 (0.004431) | 0.520583 / 0.226044 (0.294539) | 5.211520 / 2.268929 (2.942591) | 2.336136 / 55.444624 (-53.108488) | 2.011262 / 6.876477 (-4.865215) | 2.137311 / 2.142072 (-0.004762) | 0.654779 / 4.805227 (-4.150448) | 0.134555 / 6.500664 (-6.366109) | 0.066427 / 0.075469 (-0.009042) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240187 / 1.841788 (-0.601600) | 14.104063 / 8.074308 (6.029755) | 13.369572 / 10.191392 (3.178180) | 0.147891 / 0.680424 (-0.532533) | 0.016993 / 0.534201 (-0.517208) | 0.364863 / 0.579283 (-0.214420) | 0.398684 / 0.434364 (-0.035680) | 0.430524 / 0.540337 (-0.109813) | 0.520920 / 1.386936 (-0.866016) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006845 / 0.011353 (-0.004508) | 0.004420 / 0.011008 (-0.006588) | 0.078334 / 0.038508 (0.039825) | 0.030566 / 0.023109 (0.007457) | 0.409568 / 0.275898 (0.133670) | 0.458389 / 0.323480 (0.134910) | 0.005739 / 0.007986 (-0.002247) | 0.005222 / 0.004328 (0.000893) | 0.076066 / 0.004250 (0.071816) | 0.049239 / 0.037052 (0.012187) | 0.409841 / 0.258489 (0.151352) | 0.472250 / 0.293841 (0.178409) | 0.025463 / 0.128546 (-0.103084) | 0.008738 / 0.075646 (-0.066909) | 0.083114 / 0.419271 (-0.336157) | 0.041233 / 0.043533 (-0.002300) | 0.407158 / 0.255139 (0.152019) | 0.438724 / 0.283200 (0.155524) | 0.097974 / 0.141683 (-0.043709) | 1.536514 / 1.452155 (0.084360) | 1.636704 / 1.492716 (0.143987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240589 / 0.018006 (0.222583) | 0.440328 / 0.000490 (0.439838) | 0.000937 / 0.000200 (0.000737) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027559 / 0.037411 (-0.009853) | 0.109930 / 0.014526 (0.095405) | 0.113366 / 0.176557 (-0.063190) | 0.166849 / 0.737135 (-0.570286) | 0.118872 / 0.296338 (-0.177467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474120 / 0.215209 (0.258911) | 4.739222 / 2.077655 (2.661567) | 2.484386 / 1.504120 (0.980266) | 2.281937 / 1.541195 (0.740742) | 2.362974 / 1.468490 (0.894484) | 0.549897 / 4.584777 (-4.034879) | 3.425540 / 3.745712 (-0.320172) | 1.765810 / 5.269862 (-3.504051) | 1.008277 / 4.565676 (-3.557400) | 0.067288 / 0.424275 (-0.356987) | 0.011954 / 0.007607 (0.004347) | 0.577216 / 0.226044 (0.351172) | 5.790659 / 2.268929 (3.521731) | 2.946732 / 55.444624 (-52.497892) | 2.608835 / 6.876477 (-4.267641) | 2.642987 / 2.142072 (0.500915) | 0.652798 / 4.805227 (-4.152429) | 0.135909 / 6.500664 (-6.364755) | 0.068480 / 0.075469 (-0.006989) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353550 / 1.841788 (-0.488237) | 14.732084 / 8.074308 (6.657775) | 14.439174 / 10.191392 (4.247782) | 0.131445 / 0.680424 (-0.548979) | 0.016608 / 0.534201 (-0.517593) | 0.368103 / 0.579283 (-0.211180) | 0.393918 / 0.434364 (-0.040446) | 0.423562 / 0.540337 (-0.116776) | 0.515041 / 1.386936 (-0.871895) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006414 / 0.011353 (-0.004938) | 0.004704 / 0.011008 (-0.006305) | 0.096012 / 0.038508 (0.057504) | 0.032910 / 0.023109 (0.009800) | 0.290676 / 0.275898 (0.014778) | 0.319646 / 0.323480 (-0.003834) | 0.005806 / 0.007986 (-0.002180) | 0.004008 / 0.004328 (-0.000320) | 0.073982 / 0.004250 (0.069731) | 0.048985 / 0.037052 (0.011933) | 0.299498 / 0.258489 (0.041009) | 0.338118 / 0.293841 (0.044277) | 0.027680 / 0.128546 (-0.100866) | 0.009051 / 0.075646 (-0.066595) | 0.325051 / 0.419271 (-0.094221) | 0.051011 / 0.043533 (0.007478) | 0.292249 / 0.255139 (0.037110) | 0.315733 / 0.283200 (0.032533) | 0.100327 / 0.141683 (-0.041356) | 1.481862 / 1.452155 (0.029707) | 1.544884 / 1.492716 (0.052168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289610 / 0.018006 (0.271603) | 0.510164 / 0.000490 (0.509675) | 0.004726 / 0.000200 (0.004526) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027617 / 0.037411 (-0.009794) | 0.107593 / 0.014526 (0.093068) | 0.122783 / 0.176557 (-0.053774) | 0.181086 / 0.737135 (-0.556049) | 0.128030 / 0.296338 (-0.168308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403571 / 0.215209 (0.188362) | 4.002881 / 2.077655 (1.925227) | 1.805550 / 1.504120 (0.301430) | 1.619165 / 1.541195 (0.077971) | 1.606536 / 1.468490 
(0.138046) | 0.518917 / 4.584777 (-4.065860) | 3.731498 / 3.745712 (-0.014214) | 3.206645 / 5.269862 (-2.063217) | 1.641615 / 4.565676 (-2.924062) | 0.065100 / 0.424275 (-0.359175) | 0.011396 / 0.007607 (0.003789) | 0.500597 / 0.226044 (0.274553) | 4.992293 / 2.268929 (2.723364) | 2.278726 / 55.444624 (-53.165898) | 1.960823 / 6.876477 (-4.915654) | 2.038684 / 2.142072 (-0.103388) | 0.640910 / 4.805227 (-4.164318) | 0.140597 / 6.500664 (-6.360067) | 0.062114 / 0.075469 (-0.013355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.167366 / 1.841788 (-0.674422) | 14.748193 / 8.074308 (6.673884) | 13.592381 / 10.191392 (3.400989) | 0.165341 / 0.680424 (-0.515083) | 0.017360 / 0.534201 (-0.516841) | 0.393448 / 0.579283 (-0.185836) | 0.422951 / 0.434364 (-0.011413) | 0.460491 / 0.540337 (-0.079847) | 0.558238 / 1.386936 (-0.828698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006373 / 0.011353 (-0.004980) | 0.004587 / 0.011008 (-0.006421) | 0.076421 / 0.038508 (0.037913) | 0.032162 / 0.023109 (0.009052) | 0.385531 / 0.275898 (0.109633) | 0.410424 / 0.323480 (0.086944) | 0.006154 / 0.007986 (-0.001832) | 0.005533 / 0.004328 (0.001205) | 0.077035 / 0.004250 (0.072784) | 0.051571 / 0.037052 (0.014519) | 0.393283 / 0.258489 (0.134794) | 0.433756 / 0.293841 (0.139915) | 0.028381 / 0.128546 (-0.100165) | 0.009034 / 0.075646 (-0.066613) | 0.083836 / 0.419271 (-0.335435) | 0.048246 / 0.043533 (0.004713) | 0.385437 / 0.255139 (0.130298) | 0.394187 / 0.283200 (0.110987) | 0.105453 / 0.141683 (-0.036230) | 1.459173 / 1.452155 (0.007018) | 1.575083 / 1.492716 (0.082367) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320324 / 0.018006 (0.302318) | 0.502945 / 0.000490 (0.502455) | 0.004470 / 0.000200 (0.004270) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028118 / 0.037411 (-0.009293) | 0.111430 / 0.014526 (0.096904) | 0.123141 / 0.176557 (-0.053415) | 0.175215 / 0.737135 (-0.561920) | 0.126429 / 0.296338 (-0.169909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433407 / 0.215209 (0.218198) | 4.329945 / 2.077655 (2.252291) | 2.096822 / 1.504120 (0.592702) | 1.908173 / 1.541195 (0.366978) | 1.967167 / 1.468490 (0.498676) | 0.529207 / 4.584777 (-4.055570) | 3.798424 / 3.745712 (0.052712) | 3.050716 / 5.269862 (-2.219146) | 1.445009 / 4.565676 (-3.120668) | 0.066467 / 0.424275 (-0.357809) | 0.011698 / 0.007607 (0.004090) | 0.528660 / 0.226044 (0.302615) | 5.282069 / 2.268929 (3.013141) | 2.535501 / 55.444624 (-52.909124) | 2.202856 / 6.876477 (-4.673621) | 2.293225 / 2.142072 (0.151153) | 0.640216 / 4.805227 (-4.165011) | 0.140884 / 6.500664 (-6.359780) | 0.064231 / 0.075469 (-0.011238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292129 / 1.841788 (-0.549659) | 15.371370 / 8.074308 (7.297062) | 15.114854 / 10.191392 (4.923462) | 0.176870 / 0.680424 (-0.503554) | 0.017380 / 0.534201 (-0.516821) | 0.398156 / 0.579283 (-0.181127) | 0.442277 / 0.434364 (0.007913) | 0.467093 / 0.540337 (-0.073244) | 0.561599 / 1.386936 (-0.825337) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009360 / 0.011353 (-0.001993) | 0.006297 / 0.011008 (-0.004712) | 0.133131 / 0.038508 (0.094623) | 0.040261 / 0.023109 (0.017152) | 0.419101 / 0.275898 (0.143203) | 0.453087 / 0.323480 (0.129607) | 0.007718 / 0.007986 (-0.000268) | 0.005698 / 0.004328 (0.001369) | 0.102261 / 0.004250 (0.098010) | 0.055147 / 0.037052 (0.018095) | 0.428355 / 0.258489 (0.169866) | 0.505241 / 0.293841 (0.211400) | 0.046745 / 0.128546 (-0.081802) | 0.015559 / 0.075646 (-0.060088) | 0.441775 / 0.419271 (0.022503) | 0.070165 / 0.043533 (0.026632) | 0.421957 / 0.255139 (0.166818) | 0.445156 / 0.283200 (0.161957) | 0.126321 / 0.141683 (-0.015362) | 1.900486 / 1.452155 (0.448331) | 2.088630 / 1.492716 (0.595913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260244 / 0.018006 (0.242237) | 0.606317 / 0.000490 (0.605828) | 0.006827 / 0.000200 (0.006627) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031958 / 0.037411 (-0.005453) | 0.139362 / 0.014526 (0.124836) | 0.148748 / 0.176557 (-0.027809) | 0.226269 / 0.737135 (-0.510866) | 0.161145 / 0.296338 (-0.135194) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666287 / 0.215209 (0.451078) | 6.588707 / 2.077655 (4.511053) | 2.736155 / 1.504120 (1.232035) | 2.329601 / 1.541195 (0.788406) | 2.324991 / 1.468490 
(0.856501) | 0.943608 / 4.584777 (-3.641169) | 6.051653 / 3.745712 (2.305941) | 2.929150 / 5.269862 (-2.340711) | 1.804461 / 4.565676 (-2.761216) | 0.113302 / 0.424275 (-0.310973) | 0.015245 / 0.007607 (0.007638) | 0.827029 / 0.226044 (0.600984) | 8.211536 / 2.268929 (5.942608) | 3.445231 / 55.444624 (-51.999393) | 2.756728 / 6.876477 (-4.119748) | 2.904039 / 2.142072 (0.761966) | 1.162339 / 4.805227 (-3.642888) | 0.231168 / 6.500664 (-6.269496) | 0.089038 / 0.075469 (0.013569) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640619 / 1.841788 (-0.201169) | 20.034157 / 8.074308 (11.959849) | 22.346006 / 10.191392 (12.154614) | 0.255300 / 0.680424 (-0.425124) | 0.031452 / 0.534201 (-0.502749) | 0.563290 / 0.579283 (-0.015993) | 0.653556 / 0.434364 (0.219192) | 0.687663 / 0.540337 (0.147326) | 0.816432 / 1.386936 (-0.570504) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010340 / 0.011353 (-0.001013) | 0.006245 / 0.011008 (-0.004764) | 0.128012 / 0.038508 (0.089504) | 0.041799 / 0.023109 (0.018690) | 0.533340 / 0.275898 (0.257442) | 0.592243 / 0.323480 (0.268763) | 0.009256 / 0.007986 (0.001271) | 0.005310 / 0.004328 (0.000982) | 0.110973 / 0.004250 (0.106722) | 0.065465 / 0.037052 (0.028412) | 0.533845 / 0.258489 (0.275356) | 0.602190 / 0.293841 (0.308349) | 0.060245 / 0.128546 (-0.068301) | 0.016954 / 0.075646 (-0.058693) | 0.119727 / 0.419271 (-0.299545) | 0.064628 / 0.043533 (0.021095) | 0.558229 / 0.255139 (0.303090) | 0.563696 / 0.283200 (0.280496) | 0.137225 / 0.141683 (-0.004458) | 2.038605 / 1.452155 (0.586451) | 2.158655 / 1.492716 (0.665939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.327067 / 0.018006 (0.309061) | 0.628812 / 0.000490 (0.628323) | 0.010259 / 0.000200 (0.010059) | 0.000123 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037023 / 0.037411 (-0.000388) | 0.142462 / 0.014526 (0.127936) | 0.158165 / 0.176557 (-0.018392) | 0.220808 / 0.737135 (-0.516328) | 0.163608 / 0.296338 (-0.132731) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.776119 / 0.215209 (0.560910) | 7.813044 / 2.077655 (5.735389) | 3.610901 / 1.504120 (2.106781) | 3.195144 / 1.541195 (1.653950) | 3.218245 / 1.468490 (1.749755) | 1.092732 / 4.584777 (-3.492045) | 5.965526 / 3.745712 (2.219813) | 2.914683 / 5.269862 (-2.355179) | 1.848397 / 4.565676 (-2.717280) | 0.114436 / 0.424275 (-0.309839) | 0.014794 / 0.007607 (0.007187) | 0.887141 / 0.226044 (0.661096) | 9.009743 / 2.268929 (6.740815) | 4.180143 / 55.444624 (-51.264481) | 3.452194 / 6.876477 (-3.424283) | 3.493520 / 2.142072 (1.351448) | 1.233327 / 4.805227 (-3.571900) | 0.235390 / 6.500664 (-6.265274) | 0.099544 / 0.075469 (0.024075) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.853482 / 1.841788 (0.011694) | 20.071177 / 8.074308 (11.996869) | 24.507618 / 10.191392 (14.316226) | 0.260164 / 0.680424 (-0.420260) | 0.028433 / 0.534201 (-0.505768) | 0.549181 / 0.579283 (-0.030102) | 0.650069 / 0.434364 (0.215705) | 0.629541 / 0.540337 (0.089203) | 0.808932 / 1.386936 (-0.578004) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009537 / 0.011353 (-0.001816) | 0.006036 / 0.011008 (-0.004972) | 0.141210 / 0.038508 (0.102701) | 0.037493 / 0.023109 (0.014384) | 0.404285 / 0.275898 (0.128386) | 0.458906 / 0.323480 (0.135427) | 0.007224 / 0.007986 (-0.000761) | 0.005148 / 0.004328 (0.000819) | 0.103889 / 0.004250 (0.099639) | 0.048877 / 0.037052 (0.011824) | 0.413220 / 0.258489 (0.154731) | 0.458153 / 0.293841 (0.164312) | 0.046008 / 0.128546 (-0.082538) | 0.015116 / 0.075646 (-0.060531) | 0.439836 / 0.419271 (0.020565) | 0.067527 / 0.043533 (0.023994) | 0.435794 / 0.255139 (0.180656) | 0.451687 / 0.283200 (0.168487) | 0.121274 / 0.141683 (-0.020409) | 1.950199 / 1.452155 (0.498044) | 2.035589 / 1.492716 (0.542873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247056 / 0.018006 (0.229050) | 0.550348 / 0.000490 (0.549858) | 0.005504 / 0.000200 (0.005305) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032171 / 0.037411 (-0.005240) | 0.135983 / 0.014526 (0.121457) | 0.149587 / 0.176557 (-0.026970) | 0.233414 / 0.737135 (-0.503722) | 0.152598 / 0.296338 (-0.143740) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634813 / 0.215209 (0.419604) | 6.453619 / 2.077655 (4.375964) | 2.582070 / 1.504120 (1.077951) | 2.214292 / 1.541195 (0.673097) | 2.220012 / 1.468490 
(0.751522) | 0.987374 / 4.584777 (-3.597403) | 5.543760 / 3.745712 (1.798047) | 2.808865 / 5.269862 (-2.460996) | 1.714713 / 4.565676 (-2.850963) | 0.111016 / 0.424275 (-0.313259) | 0.014688 / 0.007607 (0.007081) | 0.842542 / 0.226044 (0.616498) | 8.414336 / 2.268929 (6.145407) | 3.501021 / 55.444624 (-51.943604) | 2.665335 / 6.876477 (-4.211142) | 2.843706 / 2.142072 (0.701633) | 1.196398 / 4.805227 (-3.608829) | 0.245508 / 6.500664 (-6.255156) | 0.086970 / 0.075469 (0.011501) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590244 / 1.841788 (-0.251544) | 18.694141 / 8.074308 (10.619833) | 21.752463 / 10.191392 (11.561071) | 0.264511 / 0.680424 (-0.415913) | 0.028713 / 0.534201 (-0.505488) | 0.531102 / 0.579283 (-0.048181) | 0.626302 / 0.434364 (0.191938) | 0.624541 / 0.540337 (0.084203) | 0.745745 / 1.386936 (-0.641191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010097 / 0.011353 (-0.001256) | 0.005558 / 0.011008 (-0.005451) | 0.111326 / 0.038508 (0.072818) | 0.036465 / 0.023109 (0.013356) | 0.472116 / 0.275898 (0.196218) | 0.524479 / 0.323480 (0.200999) | 0.007466 / 0.007986 (-0.000520) | 0.005440 / 0.004328 (0.001112) | 0.103482 / 0.004250 (0.099231) | 0.053217 / 0.037052 (0.016165) | 0.476685 / 0.258489 (0.218196) | 0.554011 / 0.293841 (0.260170) | 0.047157 / 0.128546 (-0.081390) | 0.015895 / 0.075646 (-0.059751) | 0.115997 / 0.419271 (-0.303274) | 0.062290 / 0.043533 (0.018758) | 0.474166 / 0.255139 (0.219027) | 0.498854 / 0.283200 (0.215655) | 0.121798 / 0.141683 (-0.019885) | 1.956583 / 1.452155 (0.504428) | 2.069620 / 1.492716 (0.576904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278637 / 0.018006 (0.260631) | 0.555295 / 0.000490 (0.554805) | 0.007401 / 0.000200 (0.007201) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033576 / 0.037411 (-0.003835) | 0.136479 / 0.014526 (0.121954) | 0.153960 / 0.176557 (-0.022597) | 0.203422 / 0.737135 (-0.533713) | 0.154159 / 0.296338 (-0.142180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.672561 / 0.215209 (0.457352) | 6.956675 / 2.077655 (4.879020) | 3.063636 / 1.504120 (1.559516) | 2.668256 / 1.541195 (1.127061) | 2.794793 / 1.468490 (1.326303) | 0.964242 / 4.584777 (-3.620535) | 5.785992 / 3.745712 (2.040279) | 2.850079 / 5.269862 (-2.419782) | 1.782491 / 4.565676 (-2.783186) | 0.114859 / 0.424275 (-0.309416) | 0.015229 / 0.007607 (0.007622) | 0.858406 / 0.226044 (0.632362) | 8.646296 / 2.268929 (6.377367) | 3.842133 / 55.444624 (-51.602492) | 3.180017 / 6.876477 (-3.696460) | 3.241315 / 2.142072 (1.099243) | 1.248988 / 4.805227 (-3.556239) | 0.235075 / 6.500664 (-6.265589) | 0.087192 / 0.075469 (0.011723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.783877 / 1.841788 (-0.057910) | 19.477223 / 8.074308 (11.402914) | 22.926734 / 10.191392 (12.735342) | 0.246970 / 0.680424 (-0.433454) | 0.026386 / 0.534201 (-0.507815) | 0.517599 / 0.579283 (-0.061684) | 0.626504 / 0.434364 (0.192140) | 0.606943 / 0.540337 (0.066606) | 0.739115 / 1.386936 (-0.647821) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008085 / 0.011353 (-0.003268) | 0.005568 / 0.011008 (-0.005440) | 0.119674 / 0.038508 (0.081166) | 0.040452 / 0.023109 (0.017343) | 0.360288 / 0.275898 (0.084390) | 0.409448 / 0.323480 (0.085968) | 0.007281 / 0.007986 (-0.000705) | 0.004931 / 0.004328 (0.000602) | 0.089956 / 0.004250 (0.085706) | 0.056088 / 0.037052 (0.019036) | 0.384708 / 0.258489 (0.126219) | 0.423506 / 0.293841 (0.129665) | 0.033280 / 0.128546 (-0.095266) | 0.010696 / 0.075646 (-0.064951) | 0.394851 / 0.419271 (-0.024421) | 0.058412 / 0.043533 (0.014879) | 0.361514 / 0.255139 (0.106375) | 0.399121 / 0.283200 (0.115921) | 0.117927 / 0.141683 (-0.023756) | 1.791499 / 1.452155 (0.339344) | 1.889000 / 1.492716 (0.396284) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253324 / 0.018006 (0.235318) | 0.536151 / 0.000490 (0.535661) | 0.010450 / 0.000200 (0.010250) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034646 / 0.037411 (-0.002765) | 0.145999 / 0.014526 (0.131473) | 0.153793 / 0.176557 (-0.022763) | 0.232871 / 0.737135 (-0.504265) | 0.161151 / 0.296338 (-0.135188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471407 / 0.215209 (0.256197) | 4.715702 / 2.077655 (2.638047) | 2.228939 / 1.504120 (0.724819) | 2.008511 / 1.541195 (0.467317) | 2.135182 / 1.468490 
(0.666692) | 0.620720 / 4.584777 (-3.964057) | 4.960731 / 3.745712 (1.215019) | 2.222469 / 5.269862 (-3.047393) | 1.284467 / 4.565676 (-3.281209) | 0.077931 / 0.424275 (-0.346344) | 0.013935 / 0.007607 (0.006328) | 0.593164 / 0.226044 (0.367120) | 5.940829 / 2.268929 (3.671900) | 2.664277 / 55.444624 (-52.780347) | 2.290655 / 6.876477 (-4.585822) | 2.496664 / 2.142072 (0.354592) | 0.759166 / 4.805227 (-4.046061) | 0.168011 / 6.500664 (-6.332653) | 0.077993 / 0.075469 (0.002524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.440663 / 1.841788 (-0.401125) | 19.105377 / 8.074308 (11.031069) | 16.068118 / 10.191392 (5.876726) | 0.193024 / 0.680424 (-0.487400) | 0.022348 / 0.534201 (-0.511853) | 0.517454 / 0.579283 (-0.061829) | 0.528072 / 0.434364 (0.093708) | 0.565293 / 0.540337 (0.024955) | 0.676578 / 1.386936 (-0.710358) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008089 / 0.011353 (-0.003264) | 0.005287 / 0.011008 (-0.005721) | 0.087964 / 0.038508 (0.049456) | 0.041548 / 0.023109 (0.018439) | 0.437733 / 0.275898 (0.161835) | 0.487878 / 0.323480 (0.164398) | 0.006898 / 0.007986 (-0.001087) | 0.004649 / 0.004328 (0.000320) | 0.086982 / 0.004250 (0.082732) | 0.056874 / 0.037052 (0.019822) | 0.437397 / 0.258489 (0.178908) | 0.490636 / 0.293841 (0.196795) | 0.033550 / 0.128546 (-0.094997) | 0.010430 / 0.075646 (-0.065216) | 0.096076 / 0.419271 (-0.323196) | 0.054028 / 0.043533 (0.010495) | 0.450262 / 0.255139 (0.195123) | 0.465566 / 0.283200 (0.182366) | 0.119987 / 0.141683 (-0.021696) | 1.764428 / 1.452155 (0.312273) | 1.841547 / 1.492716 (0.348831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271427 / 0.018006 (0.253420) | 0.506386 / 0.000490 (0.505896) | 0.001213 / 0.000200 (0.001013) | 0.000125 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036159 / 0.037411 (-0.001253) | 0.140578 / 0.014526 (0.126053) | 0.147517 / 0.176557 (-0.029040) | 0.206215 / 0.737135 (-0.530921) | 0.152560 / 0.296338 (-0.143779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522833 / 0.215209 (0.307624) | 5.215732 / 2.077655 (3.138077) | 2.553406 / 1.504120 (1.049286) | 2.344815 / 1.541195 (0.803620) | 2.422377 / 1.468490 (0.953886) | 0.631197 / 4.584777 (-3.953580) | 4.906216 / 3.745712 (1.160504) | 2.212923 / 5.269862 (-3.056938) | 1.352937 / 4.565676 (-3.212740) | 0.079141 / 0.424275 (-0.345135) | 0.013691 / 0.007607 (0.006084) | 0.634939 / 0.226044 (0.408895) | 6.578770 / 2.268929 (4.309842) | 3.080339 / 55.444624 (-52.364286) | 2.710243 / 6.876477 (-4.166234) | 2.740476 / 2.142072 (0.598404) | 0.783610 / 4.805227 (-4.021617) | 0.171589 / 6.500664 (-6.329075) | 0.077311 / 0.075469 (0.001842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584847 / 1.841788 (-0.256941) | 19.510132 / 8.074308 (11.435824) | 18.074572 / 10.191392 (7.883180) | 0.173494 / 0.680424 (-0.506930) | 0.021149 / 0.534201 (-0.513052) | 0.469026 / 0.579283 (-0.110258) | 0.518463 / 0.434364 (0.084099) | 0.550363 / 0.540337 (0.010026) | 0.667087 / 1.386936 (-0.719849) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.004783 / 0.011008 (-0.006225) | 0.103991 / 0.038508 (0.065483) | 0.039098 / 0.023109 (0.015989) | 0.319851 / 0.275898 (0.043952) | 0.356104 / 0.323480 (0.032625) | 0.007077 / 0.007986 (-0.000909) | 0.004188 / 0.004328 (-0.000141) | 0.078360 / 0.004250 (0.074109) | 0.050951 / 0.037052 (0.013899) | 0.321791 / 0.258489 (0.063302) | 0.356123 / 0.293841 (0.062283) | 0.028967 / 0.128546 (-0.099579) | 0.009091 / 0.075646 (-0.066555) | 0.355265 / 0.419271 (-0.064007) | 0.052521 / 0.043533 (0.008988) | 0.317333 / 0.255139 (0.062194) | 0.340747 / 0.283200 (0.057547) | 0.104354 / 0.141683 (-0.037329) | 1.522791 / 1.452155 (0.070636) | 1.579835 / 1.492716 (0.087118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260539 / 0.018006 (0.242532) | 0.454230 / 0.000490 (0.453740) | 0.036588 / 0.000200 (0.036388) | 0.000289 / 0.000054 (0.000235) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028375 / 0.037411 (-0.009036) | 0.118939 / 0.014526 (0.104413) | 0.126553 / 0.176557 (-0.050004) | 0.184596 / 0.737135 (-0.552539) | 0.130583 / 0.296338 (-0.165755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417353 / 0.215209 (0.202144) | 4.171595 / 2.077655 (2.093940) | 1.855096 / 1.504120 (0.350976) | 1.673941 / 1.541195 (0.132747) | 1.761370 / 1.468490 
(0.292880) | 0.544081 / 4.584777 (-4.040696) | 3.851877 / 3.745712 (0.106165) | 1.896661 / 5.269862 (-3.373200) | 1.093303 / 4.565676 (-3.472373) | 0.067967 / 0.424275 (-0.356308) | 0.012313 / 0.007607 (0.004706) | 0.532316 / 0.226044 (0.306272) | 5.336016 / 2.268929 (3.067087) | 2.344780 / 55.444624 (-53.099845) | 1.993909 / 6.876477 (-4.882568) | 2.167324 / 2.142072 (0.025251) | 0.670334 / 4.805227 (-4.134893) | 0.147705 / 6.500664 (-6.352959) | 0.067634 / 0.075469 (-0.007835) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251005 / 1.841788 (-0.590783) | 15.405531 / 8.074308 (7.331223) | 14.197019 / 10.191392 (4.005627) | 0.144230 / 0.680424 (-0.536193) | 0.018352 / 0.534201 (-0.515849) | 0.427536 / 0.579283 (-0.151748) | 0.433135 / 0.434364 (-0.001229) | 0.502624 / 0.540337 (-0.037713) | 0.612312 / 1.386936 (-0.774624) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007011 / 0.011353 (-0.004342) | 0.004857 / 0.011008 (-0.006151) | 0.077797 / 0.038508 (0.039289) | 0.035411 / 0.023109 (0.012302) | 0.368234 / 0.275898 (0.092336) | 0.408359 / 0.323480 (0.084879) | 0.005883 / 0.007986 (-0.002102) | 0.004311 / 0.004328 (-0.000017) | 0.077216 / 0.004250 (0.072966) | 0.052062 / 0.037052 (0.015010) | 0.368502 / 0.258489 (0.110013) | 0.428681 / 0.293841 (0.134840) | 0.028889 / 0.128546 (-0.099657) | 0.009146 / 0.075646 (-0.066501) | 0.085515 / 0.419271 (-0.333756) | 0.050216 / 0.043533 (0.006683) | 0.359562 / 0.255139 (0.104423) | 0.378335 / 0.283200 (0.095135) | 0.106351 / 0.141683 (-0.035332) | 1.538943 / 1.452155 (0.086788) | 1.663572 / 1.492716 (0.170855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216917 / 0.018006 (0.198911) | 0.444130 / 0.000490 (0.443641) | 0.002640 / 0.000200 (0.002440) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032509 / 0.037411 (-0.004902) | 0.123955 / 0.014526 (0.109430) | 0.133236 / 0.176557 (-0.043321) | 0.187408 / 0.737135 (-0.549727) | 0.136696 / 0.296338 (-0.159643) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443714 / 0.215209 (0.228505) | 4.416973 / 2.077655 (2.339318) | 2.145279 / 1.504120 (0.641159) | 1.946669 / 1.541195 (0.405474) | 2.044105 / 1.468490 (0.575614) | 0.534463 / 4.584777 (-4.050314) | 3.824926 / 3.745712 (0.079214) | 3.151796 / 5.269862 (-2.118066) | 1.497513 / 4.565676 (-3.068164) | 0.066799 / 0.424275 (-0.357476) | 0.012408 / 0.007607 (0.004801) | 0.544182 / 0.226044 (0.318138) | 5.419403 / 2.268929 (3.150474) | 2.605191 / 55.444624 (-52.839433) | 2.285354 / 6.876477 (-4.591123) | 2.359520 / 2.142072 (0.217448) | 0.655489 / 4.805227 (-4.149738) | 0.143496 / 6.500664 (-6.357168) | 0.066782 / 0.075469 (-0.008687) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329370 / 1.841788 (-0.512418) | 16.058019 / 8.074308 (7.983711) | 15.119769 / 10.191392 (4.928377) | 0.147967 / 0.680424 (-0.532457) | 0.018360 / 0.534201 (-0.515841) | 0.436847 / 0.579283 (-0.142436) | 0.435136 / 0.434364 (0.000773) | 0.507176 / 0.540337 (-0.033161) | 0.610627 / 1.386936 (-0.776309) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006425 / 0.011353 (-0.004927) | 0.003710 / 0.011008 (-0.007298) | 0.102072 / 0.038508 (0.063564) | 0.033974 / 0.023109 (0.010865) | 0.273146 / 0.275898 (-0.002752) | 0.313254 / 0.323480 (-0.010226) | 0.004889 / 0.007986 (-0.003096) | 0.004803 / 0.004328 (0.000475) | 0.067359 / 0.004250 (0.063109) | 0.040281 / 0.037052 (0.003228) | 0.302106 / 0.258489 (0.043617) | 0.318039 / 0.293841 (0.024198) | 0.028839 / 0.128546 (-0.099707) | 0.008726 / 0.075646 (-0.066921) | 0.322532 / 0.419271 (-0.096739) | 0.048845 / 0.043533 (0.005312) | 0.299836 / 0.255139 (0.044697) | 0.300983 / 0.283200 (0.017784) | 0.103384 / 0.141683 (-0.038299) | 1.417245 / 1.452155 (-0.034910) | 1.538819 / 1.492716 (0.046102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219798 / 0.018006 (0.201792) | 0.442297 / 0.000490 (0.441807) | 0.013792 / 0.000200 (0.013592) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024996 / 0.037411 (-0.012416) | 0.098558 / 0.014526 (0.084032) | 0.116423 / 0.176557 (-0.060133) | 0.163481 / 0.737135 (-0.573654) | 0.115031 / 0.296338 (-0.181308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392411 / 0.215209 (0.177202) | 4.025992 / 2.077655 (1.948337) | 1.850809 / 1.504120 (0.346690) | 1.668330 / 1.541195 (0.127136) | 1.627041 / 1.468490 
(0.158551) | 0.510721 / 4.584777 (-4.074055) | 3.841318 / 3.745712 (0.095606) | 3.416979 / 5.269862 (-1.852883) | 1.640796 / 4.565676 (-2.924880) | 0.061968 / 0.424275 (-0.362307) | 0.010281 / 0.007607 (0.002674) | 0.485592 / 0.226044 (0.259548) | 4.872205 / 2.268929 (2.603277) | 2.146753 / 55.444624 (-53.297871) | 1.832087 / 6.876477 (-5.044390) | 1.920928 / 2.142072 (-0.221144) | 0.606363 / 4.805227 (-4.198864) | 0.134351 / 6.500664 (-6.366313) | 0.057583 / 0.075469 (-0.017886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.153048 / 1.841788 (-0.688739) | 14.165743 / 8.074308 (6.091435) | 12.237798 / 10.191392 (2.046406) | 0.159815 / 0.680424 (-0.520608) | 0.018226 / 0.534201 (-0.515975) | 0.372390 / 0.579283 (-0.206893) | 0.396552 / 0.434364 (-0.037811) | 0.439445 / 0.540337 (-0.100892) | 0.521924 / 1.386936 (-0.865012) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006162 / 0.011353 (-0.005191) | 0.004006 / 0.011008 (-0.007002) | 0.067226 / 0.038508 (0.028718) | 0.030285 / 0.023109 (0.007176) | 0.361220 / 0.275898 (0.085322) | 0.386783 / 0.323480 (0.063303) | 0.005202 / 0.007986 (-0.002784) | 0.003453 / 0.004328 (-0.000876) | 0.068299 / 0.004250 (0.064048) | 0.041433 / 0.037052 (0.004381) | 0.360222 / 0.258489 (0.101733) | 0.399327 / 0.293841 (0.105486) | 0.026066 / 0.128546 (-0.102480) | 0.008025 / 0.075646 (-0.067621) | 0.079588 / 0.419271 (-0.339683) | 0.042616 / 0.043533 (-0.000917) | 0.347639 / 0.255139 (0.092500) | 0.386092 / 0.283200 (0.102893) | 0.100869 / 0.141683 (-0.040814) | 1.386901 / 1.452155 (-0.065254) | 1.471523 / 1.492716 (-0.021193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217020 / 0.018006 (0.199014) | 0.431033 / 0.000490 (0.430543) | 0.002902 / 0.000200 (0.002702) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027396 / 0.037411 (-0.010015) | 0.114154 / 0.014526 (0.099629) | 0.117918 / 0.176557 (-0.058638) | 0.173342 / 0.737135 (-0.563794) | 0.125812 / 0.296338 (-0.170526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424843 / 0.215209 (0.209634) | 4.324828 / 2.077655 (2.247174) | 2.188263 / 1.504120 (0.684143) | 1.912288 / 1.541195 (0.371094) | 2.011621 / 1.468490 (0.543131) | 0.560944 / 4.584777 (-4.023833) | 3.975047 / 3.745712 (0.229335) | 3.130242 / 5.269862 (-2.139619) | 1.667902 / 4.565676 (-2.897775) | 0.062245 / 0.424275 (-0.362030) | 0.011300 / 0.007607 (0.003692) | 0.498571 / 0.226044 (0.272527) | 5.024887 / 2.268929 (2.755958) | 2.482967 / 55.444624 (-52.961657) | 2.216125 / 6.876477 (-4.660352) | 2.175856 / 2.142072 (0.033783) | 0.615207 / 4.805227 (-4.190021) | 0.133808 / 6.500664 (-6.366856) | 0.058681 / 0.075469 (-0.016788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370150 / 1.841788 (-0.471637) | 14.580907 / 8.074308 (6.506599) | 14.209955 / 10.191392 (4.018563) | 0.139738 / 0.680424 (-0.540686) | 0.018722 / 0.534201 (-0.515479) | 0.375755 / 0.579283 (-0.203528) | 0.428335 / 0.434364 (-0.006029) | 0.438957 / 0.540337 (-0.101380) | 0.541130 / 1.386936 (-0.845806) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-15T15:28:34Z
| 2023-06-08T16:40:18Z
| 2023-06-08T16:32:51Z
|
MEMBER
| null | null | null |
This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it!
Fixes #5855
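For context, a minimal sketch of how `to_tf_dataset` is typically invoked (this is not the PR's internals; it assumes TensorFlow is installed, and the column names and toy data are illustrative only):

```python
from datasets import Dataset

# Toy dataset; "x" and "y" are placeholder column names.
ds = Dataset.from_dict({"x": [[1, 2], [3, 4], [5, 6]], "y": [0, 1, 0]})

tf_ds = ds.to_tf_dataset(
    columns=["x"],       # model inputs
    label_cols=["y"],    # labels
    batch_size=2,
    shuffle=True,        # shuffling is where the index tensor is used
)

# Iterate one batch to confirm the tf.data.Dataset is wired up.
for batch in tf_ds.take(1):
    print(batch)
```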
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5863/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5863/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5863.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5863",
"merged_at": "2023-06-08T16:32:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5863.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5863"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7058
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7058/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7058/events
|
https://github.com/huggingface/datasets/issues/7058
| 2,422,560,355
|
I_kwDODunzps6QZVZj
| 7,058
|
New feature type: Document
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-07-22T10:49:20Z
| 2024-07-22T10:49:20Z
| null |
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069
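For illustration, a sketch of how existing media feature types are declared today, with the requested `Document` type shown purely hypothetically (it does not exist in `datasets`; it is included only to show where it would slot in):

```python
from datasets import Features, Image, Value

# Existing media feature types are declared like this:
features = Features({
    "page_image": Image(),        # decoded/previewed by the viewer
    "title": Value("string"),
})

# Hypothetical, by analogy with Image()/Audio() -- Document() does NOT exist:
# features = Features({"file": Document()})   # e.g. for PDFs
```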
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7058/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6729/events
|
https://github.com/huggingface/datasets/issues/6729
| 2,180,237,159
|
I_kwDODunzps6B88dn
| 6,729
|
Support zipfiles that span multiple disks?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
closed
| false
| null |
[] | null |
[
"@severo were you able to solve it?",
"No. cc @albertvillanova @lhoestq @polinaeterna for an evaluation of what it would take to support this feature.",
"The underlying issue issue is that the dataset repository has used split ZIP archive files: https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data\r\n```\r\ndownstream_dataset_patches_npzip.z01\r\ndownstream_dataset_patches_npzip.z02\r\n...\r\ndownstream_dataset_patches_npzip.zip\r\n```\r\nand these are not supported by the Python standard library package `zipfile`.",
"It's a pretty bad way to share a dataset since one needs to download the full dataset to use it.\r\n\r\nWe likely won't support this format.",
"I agree it is a format we maybe should not support: streaming is not possible.",
"I opened a PR in the reported repo to disable the viewer: https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/discussions/1"
] | 2024-03-11T21:07:41Z

| 2024-06-26T05:08:59Z
| 2024-06-26T05:05:28Z
|
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 67, in compute_config_names_response
get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1846, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1240, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 584, in infer_module_for_data_files
split_modules = {
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 585, in <dictcomp>
split: infer_module_for_data_files_list(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 526, in infer_module_for_data_files_list
return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 554, in infer_module_for_data_files_list_in_archives
for f in xglob(extracted, recursive=True, download_config=download_config)[
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 576, in xglob
fs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 622, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 290, in filesystem
return cls(**storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__
self.zip = zipfile.ZipFile(
File "/usr/local/lib/python3.9/zipfile.py", line 1266, in __init__
self._RealGetContents()
File "/usr/local/lib/python3.9/zipfile.py", line 1329, in _RealGetContents
endrec = _EndRecData(fp)
File "/usr/local/lib/python3.9/zipfile.py", line 286, in _EndRecData
return _EndRecData64(fpin, -sizeEndCentDir, endrec)
File "/usr/local/lib/python3.9/zipfile.py", line 232, in _EndRecData64
raise BadZipFile("zipfiles that span multiple disks are not supported")
zipfile.BadZipFile: zipfiles that span multiple disks are not supported
```
The files (https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are:
<img width="629" alt="Capture d’écran 2024-03-11 à 22 07 30" src="https://github.com/huggingface/datasets/assets/1676121/0bb15a51-d54f-4d73-8572-e427ea644b36">
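As the traceback shows, Python's `zipfile` refuses multi-disk (split) archives outright. A possible local workaround, sketched below under the assumption that the parts were produced by Info-ZIP's `zip -s`, that all parts have been downloaded into the same directory, and that the `zip` CLI is available (file names are illustrative):

```python
import subprocess
import zipfile

# `zip -s 0 <split>.zip --out <single>.zip` rewrites a split archive
# (foo.z01, foo.z02, ..., foo.zip) into a single-file archive.
subprocess.run(
    ["zip", "-s", "0", "downstream_dataset_patches_npzip.zip",
     "--out", "combined.zip"],
    check=True,
)

# The combined archive can now be opened by the standard library.
with zipfile.ZipFile("combined.zip") as zf:
    print(zf.namelist()[:5])
```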
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6729/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6160
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6160/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6160/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6160/events
|
https://github.com/huggingface/datasets/pull/6160
| 1,855,760,543
|
PR_kwDODunzps5YMtLQ
| 6,160
|
Fix Parquet loading with `columns`
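
For context, a hedged sketch of the `columns` argument to the packaged Parquet loader that this PR touches (the data-files pattern and column names are placeholders):

```python
from datasets import load_dataset

# Read only a subset of columns from the Parquet files.
ds = load_dataset(
    "parquet",
    data_files="data/train-*.parquet",   # placeholder glob
    columns=["id", "text"],              # placeholder column names
    split="train",
)
print(ds.column_names)
```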
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008368 / 0.011353 (-0.002985) | 0.004754 / 0.011008 (-0.006254) | 0.096646 / 0.038508 (0.058138) | 0.088980 / 0.023109 (0.065871) | 0.374532 / 0.275898 (0.098633) | 0.404840 / 0.323480 (0.081360) | 0.006026 / 0.007986 (-0.001960) | 0.005716 / 0.004328 (0.001387) | 0.076297 / 0.004250 (0.072047) | 0.072335 / 0.037052 (0.035283) | 0.379435 / 0.258489 (0.120946) | 0.423449 / 0.293841 (0.129608) | 0.041344 / 0.128546 (-0.087202) | 0.009758 / 0.075646 (-0.065889) | 0.341550 / 0.419271 (-0.077721) | 0.068559 / 0.043533 (0.025026) | 0.368313 / 0.255139 (0.113174) | 0.415147 / 0.283200 (0.131947) | 0.028692 / 0.141683 (-0.112990) | 1.816198 / 1.452155 (0.364044) | 1.983351 / 1.492716 (0.490635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222712 / 0.018006 (0.204706) | 0.517850 / 0.000490 (0.517360) | 0.004436 / 0.000200 (0.004236) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033168 / 0.037411 (-0.004243) | 0.101353 / 0.014526 (0.086827) | 0.113235 / 0.176557 (-0.063322) | 0.180308 / 0.737135 (-0.556827) | 0.114604 / 0.296338 (-0.181734) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454415 / 0.215209 (0.239206) | 4.500355 / 2.077655 (2.422701) | 2.188223 / 1.504120 (0.684103) | 1.974256 / 1.541195 (0.433061) | 2.067331 / 1.468490 
(0.598841) | 0.572982 / 4.584777 (-4.011795) | 4.239160 / 3.745712 (0.493448) | 3.836812 / 5.269862 (-1.433049) | 2.367022 / 4.565676 (-2.198655) | 0.066886 / 0.424275 (-0.357389) | 0.009111 / 0.007607 (0.001504) | 0.539881 / 0.226044 (0.313837) | 5.362247 / 2.268929 (3.093319) | 2.784044 / 55.444624 (-52.660580) | 2.320975 / 6.876477 (-4.555502) | 2.543108 / 2.142072 (0.401036) | 0.685751 / 4.805227 (-4.119477) | 0.156840 / 6.500664 (-6.343824) | 0.071764 / 0.075469 (-0.003705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.549830 / 1.841788 (-0.291958) | 22.799622 / 8.074308 (14.725314) | 16.750692 / 10.191392 (6.559300) | 0.196192 / 0.680424 (-0.484232) | 0.024518 / 0.534201 (-0.509683) | 0.479302 / 0.579283 (-0.099981) | 0.522256 / 0.434364 (0.087892) | 0.545809 / 0.540337 (0.005471) | 0.748437 / 1.386936 (-0.638499) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007789 / 0.011353 (-0.003564) | 0.004563 / 0.011008 (-0.006445) | 0.074631 / 0.038508 (0.036123) | 0.086892 / 0.023109 (0.063783) | 0.427014 / 0.275898 (0.151116) | 0.463257 / 0.323480 (0.139777) | 0.005987 / 0.007986 (-0.001999) | 0.003803 / 0.004328 (-0.000526) | 0.074799 / 0.004250 (0.070549) | 0.063473 / 0.037052 (0.026420) | 0.429905 / 0.258489 (0.171416) | 0.468967 / 0.293841 (0.175127) | 0.036768 / 0.128546 (-0.091778) | 0.009675 / 0.075646 (-0.065971) | 0.082546 / 0.419271 (-0.336725) | 0.058027 / 0.043533 (0.014494) | 0.429813 / 0.255139 (0.174674) | 0.449200 / 0.283200 (0.166001) | 0.026713 / 0.141683 (-0.114969) | 1.812022 / 1.452155 (0.359867) | 1.847305 / 1.492716 (0.354589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320383 / 0.018006 (0.302377) | 0.485995 / 0.000490 (0.485505) | 0.024365 / 0.000200 (0.024165) | 0.000156 / 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036341 / 0.037411 (-0.001071) | 0.104635 / 0.014526 (0.090110) | 0.119456 / 0.176557 (-0.057101) | 0.182042 / 0.737135 (-0.555093) | 0.118944 / 0.296338 (-0.177395) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506410 / 0.215209 (0.291201) | 5.061119 / 2.077655 (2.983465) | 2.756557 / 1.504120 (1.252437) | 2.546504 / 1.541195 (1.005309) | 2.585509 / 1.468490 (1.117019) | 0.564291 / 4.584777 (-4.020486) | 4.281219 / 3.745712 (0.535507) | 3.919439 / 5.269862 (-1.350423) | 2.588788 / 4.565676 (-1.976889) | 0.066900 / 0.424275 (-0.357375) | 0.008680 / 0.007607 (0.001073) | 0.598435 / 0.226044 (0.372390) | 5.976054 / 2.268929 (3.707125) | 3.260211 / 55.444624 (-52.184414) | 2.874597 / 6.876477 (-4.001880) | 3.105769 / 2.142072 (0.963697) | 0.692938 / 4.805227 (-4.112289) | 0.157777 / 6.500664 (-6.342887) | 0.073128 / 0.075469 (-0.002341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.559380 / 1.841788 (-0.282408) | 22.986540 / 8.074308 (14.912232) | 16.305564 / 10.191392 (6.114172) | 0.174939 / 0.680424 (-0.505485) | 0.021932 / 0.534201 (-0.512269) | 0.468162 / 0.579283 (-0.111121) | 0.472610 / 0.434364 (0.038246) | 0.574574 / 0.540337 (0.034237) | 0.783505 / 1.386936 (-0.603431) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012553 / 0.011353 (0.001201) | 0.005358 / 0.011008 (-0.005650) | 0.108338 / 0.038508 (0.069830) | 0.101105 / 0.023109 (0.077995) | 0.416808 / 0.275898 (0.140910) | 0.454599 / 0.323480 (0.131119) | 0.006665 / 0.007986 (-0.001321) | 0.004186 / 0.004328 (-0.000143) | 0.084900 / 0.004250 (0.080649) | 0.062881 / 0.037052 (0.025829) | 0.424423 / 0.258489 (0.165934) | 0.482651 / 0.293841 (0.188810) | 0.055740 / 0.128546 (-0.072807) | 0.014469 / 0.075646 (-0.061177) | 0.383267 / 0.419271 (-0.036005) | 0.067487 / 0.043533 (0.023955) | 0.414983 / 0.255139 (0.159844) | 0.459437 / 0.283200 (0.176237) | 0.038679 / 0.141683 (-0.103004) | 1.828002 / 1.452155 (0.375847) | 1.951946 / 1.492716 (0.459230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288033 / 0.018006 (0.270027) | 0.603536 / 0.000490 (0.603046) | 0.004874 / 0.000200 (0.004674) | 0.000138 / 0.000054 (0.000084) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031988 / 0.037411 (-0.005423) | 0.095807 / 0.014526 (0.081281) | 0.113459 / 0.176557 (-0.063098) | 0.182012 / 0.737135 (-0.555123) | 0.113121 / 0.296338 (-0.183217) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620709 / 0.215209 (0.405500) | 6.096569 / 2.077655 (4.018915) | 2.754612 / 1.504120 (1.250492) | 2.449786 / 1.541195 (0.908591) | 2.470694 / 1.468490 
(1.002204) | 0.837016 / 4.584777 (-3.747761) | 5.237290 / 3.745712 (1.491578) | 4.713220 / 5.269862 (-0.556642) | 3.020934 / 4.565676 (-1.544743) | 0.096892 / 0.424275 (-0.327383) | 0.009423 / 0.007607 (0.001816) | 0.720313 / 0.226044 (0.494269) | 7.369673 / 2.268929 (5.100744) | 3.550384 / 55.444624 (-51.894241) | 2.868868 / 6.876477 (-4.007609) | 3.081469 / 2.142072 (0.939397) | 1.042968 / 4.805227 (-3.762259) | 0.232530 / 6.500664 (-6.268134) | 0.080805 / 0.075469 (0.005336) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.645777 / 1.841788 (-0.196011) | 24.590862 / 8.074308 (16.516554) | 21.315496 / 10.191392 (11.124104) | 0.228796 / 0.680424 (-0.451628) | 0.028479 / 0.534201 (-0.505722) | 0.494413 / 0.579283 (-0.084870) | 0.582773 / 0.434364 (0.148409) | 0.552575 / 0.540337 (0.012238) | 0.787217 / 1.386936 (-0.599719) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008743 / 0.011353 (-0.002609) | 0.005253 / 0.011008 (-0.005755) | 0.083766 / 0.038508 (0.045257) | 0.086305 / 0.023109 (0.063195) | 0.520171 / 0.275898 (0.244273) | 0.565812 / 0.323480 (0.242332) | 0.006465 / 0.007986 (-0.001520) | 0.004585 / 0.004328 (0.000257) | 0.085344 / 0.004250 (0.081094) | 0.063418 / 0.037052 (0.026366) | 0.519759 / 0.258489 (0.261270) | 0.552770 / 0.293841 (0.258929) | 0.049439 / 0.128546 (-0.079107) | 0.017564 / 0.075646 (-0.058082) | 0.092713 / 0.419271 (-0.326559) | 0.065837 / 0.043533 (0.022305) | 0.516133 / 0.255139 (0.260994) | 0.539813 / 0.283200 (0.256613) | 0.036531 / 0.141683 (-0.105152) | 1.919275 / 1.452155 (0.467121) | 2.039987 / 1.492716 (0.547271) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.297978 / 0.018006 (0.279972) | 0.608243 / 0.000490 (0.607753) | 0.006611 / 0.000200 (0.006411) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033909 / 0.037411 (-0.003503) | 0.106370 / 0.014526 (0.091844) | 0.119032 / 0.176557 (-0.057524) | 0.180319 / 0.737135 (-0.556816) | 0.122826 / 0.296338 (-0.173513) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639265 / 0.215209 (0.424056) | 6.248430 / 2.077655 (4.170775) | 2.944760 / 1.504120 (1.440640) | 2.654005 / 1.541195 (1.112811) | 2.733625 / 1.468490 (1.265134) | 0.837172 / 4.584777 (-3.747605) | 5.245084 / 3.745712 (1.499372) | 4.722614 / 5.269862 (-0.547248) | 3.008286 / 4.565676 (-1.557391) | 0.102340 / 0.424275 (-0.321935) | 0.009433 / 0.007607 (0.001826) | 0.762991 / 0.226044 (0.536946) | 7.385020 / 2.268929 (5.116092) | 3.787648 / 55.444624 (-51.656977) | 3.234345 / 6.876477 (-3.642132) | 3.394444 / 2.142072 (1.252371) | 1.023472 / 4.805227 (-3.781756) | 0.208199 / 6.500664 (-6.292465) | 0.081513 / 0.075469 (0.006043) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.795864 / 1.841788 (-0.045923) | 25.270852 / 8.074308 (17.196544) | 23.356413 / 10.191392 (13.165021) | 0.228002 / 0.680424 (-0.452422) | 0.031851 / 0.534201 (-0.502350) | 0.499424 / 0.579283 (-0.079859) | 0.588027 / 0.434364 (0.153664) | 0.581746 / 0.540337 (0.041408) | 0.814183 / 1.386936 (-0.572753) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006477 / 0.011353 (-0.004876) | 0.003878 / 0.011008 (-0.007130) | 0.084085 / 0.038508 (0.045577) | 0.071297 / 0.023109 (0.048188) | 0.309176 / 0.275898 (0.033278) | 0.342830 / 0.323480 (0.019350) | 0.005189 / 0.007986 (-0.002796) | 0.003263 / 0.004328 (-0.001065) | 0.063920 / 0.004250 (0.059670) | 0.052233 / 0.037052 (0.015180) | 0.324830 / 0.258489 (0.066341) | 0.357956 / 0.293841 (0.064115) | 0.030459 / 0.128546 (-0.098087) | 0.008350 / 0.075646 (-0.067297) | 0.287330 / 0.419271 (-0.131942) | 0.051005 / 0.043533 (0.007473) | 0.309227 / 0.255139 (0.054088) | 0.346184 / 0.283200 (0.062984) | 0.023961 / 0.141683 (-0.117722) | 1.463983 / 1.452155 (0.011829) | 1.573036 / 1.492716 (0.080319) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205653 / 0.018006 (0.187647) | 0.457336 / 0.000490 (0.456846) | 0.005347 / 0.000200 (0.005147) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028080 / 0.037411 (-0.009332) | 0.081755 / 0.014526 (0.067229) | 0.095716 / 0.176557 (-0.080841) | 0.151340 / 0.737135 (-0.585795) | 0.097174 / 0.296338 (-0.199164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390725 / 0.215209 (0.175516) | 3.899114 / 2.077655 (1.821459) | 1.895352 / 1.504120 (0.391232) | 1.716072 / 1.541195 (0.174877) | 1.784952 / 1.468490 
(0.316462) | 0.477247 / 4.584777 (-4.107530) | 3.606641 / 3.745712 (-0.139071) | 3.203337 / 5.269862 (-2.066524) | 2.017003 / 4.565676 (-2.548674) | 0.056182 / 0.424275 (-0.368094) | 0.007508 / 0.007607 (-0.000099) | 0.461965 / 0.226044 (0.235921) | 4.605926 / 2.268929 (2.336997) | 2.466695 / 55.444624 (-52.977929) | 2.136376 / 6.876477 (-4.740100) | 2.277334 / 2.142072 (0.135261) | 0.576119 / 4.805227 (-4.229109) | 0.131497 / 6.500664 (-6.369167) | 0.060068 / 0.075469 (-0.015401) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262681 / 1.841788 (-0.579107) | 19.411572 / 8.074308 (11.337264) | 14.383421 / 10.191392 (4.192029) | 0.166115 / 0.680424 (-0.514308) | 0.018366 / 0.534201 (-0.515835) | 0.393903 / 0.579283 (-0.185380) | 0.408788 / 0.434364 (-0.025576) | 0.461796 / 0.540337 (-0.078541) | 0.628460 / 1.386936 (-0.758476) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006501 / 0.011353 (-0.004852) | 0.003915 / 0.011008 (-0.007093) | 0.065245 / 0.038508 (0.026737) | 0.073146 / 0.023109 (0.050037) | 0.363537 / 0.275898 (0.087639) | 0.391571 / 0.323480 (0.068092) | 0.005181 / 0.007986 (-0.002805) | 0.003272 / 0.004328 (-0.001056) | 0.065060 / 0.004250 (0.060810) | 0.054302 / 0.037052 (0.017249) | 0.361571 / 0.258489 (0.103082) | 0.400221 / 0.293841 (0.106380) | 0.030762 / 0.128546 (-0.097784) | 0.008449 / 0.075646 (-0.067197) | 0.071148 / 0.419271 (-0.348123) | 0.048111 / 0.043533 (0.004578) | 0.360327 / 0.255139 (0.105188) | 0.379073 / 0.283200 (0.095874) | 0.024367 / 0.141683 (-0.117316) | 1.451080 / 1.452155 (-0.001074) | 1.510818 / 1.492716 (0.018102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267078 / 0.018006 (0.249072) | 0.454074 / 0.000490 (0.453584) | 0.015055 / 0.000200 (0.014855) | 0.000129 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030916 / 0.037411 (-0.006496) | 0.089212 / 0.014526 (0.074686) | 0.100005 / 0.176557 (-0.076552) | 0.155100 / 0.737135 (-0.582035) | 0.101759 / 0.296338 (-0.194580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412826 / 0.215209 (0.197616) | 4.122520 / 2.077655 (2.044865) | 2.107870 / 1.504120 (0.603750) | 1.911936 / 1.541195 (0.370741) | 1.984936 / 1.468490 (0.516446) | 0.483835 / 4.584777 (-4.100942) | 3.641860 / 3.745712 (-0.103852) | 3.220540 / 5.269862 (-2.049322) | 2.015521 / 4.565676 (-2.550155) | 0.056913 / 0.424275 (-0.367362) | 0.007285 / 0.007607 (-0.000322) | 0.484886 / 0.226044 (0.258842) | 4.854734 / 2.268929 (2.585805) | 2.593550 / 55.444624 (-52.851074) | 2.233904 / 6.876477 (-4.642572) | 2.438858 / 2.142072 (0.296785) | 0.580880 / 4.805227 (-4.224347) | 0.133891 / 6.500664 (-6.366773) | 0.061678 / 0.075469 (-0.013791) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336843 / 1.841788 (-0.504944) | 19.731571 / 8.074308 (11.657263) | 14.290228 / 10.191392 (4.098836) | 0.167635 / 0.680424 (-0.512789) | 0.018767 / 0.534201 (-0.515434) | 0.394953 / 0.579283 (-0.184330) | 0.407711 / 0.434364 (-0.026653) | 0.472371 / 0.540337 (-0.067966) | 0.655278 / 1.386936 (-0.731658) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-17T21:58:24Z
| 2023-08-17T22:44:59Z
| 2023-08-17T22:36:04Z
|
COLLABORATOR
| null | null | null |
Fix #6149
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6160/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6160/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6160.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6160",
"merged_at": "2023-08-17T22:36:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6160.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6160"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6695
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6695/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6695/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6695/events
|
https://github.com/huggingface/datasets/issues/6695
| 2,154,075,509
|
I_kwDODunzps6AZJV1
| 6,695
|
Support JSON file with an array of strings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 has been fixed, but how can we check if there are other datasets with the same error, in datasets-server's database? I don't know how to get the list of erroneous cache entries, since we only copied `Error code: JobManagerCrashedError`, but not the traceback in `details`... Do you remember the error message, or the underlying exception, we had?"
] | 2024-02-26T12:35:11Z
| 2024-03-08T14:16:25Z
| 2024-02-28T06:39:13Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1
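For illustration, a minimal sketch of what such a file and its loading could look like once supported (the file name and the resulting column name "text" are assumptions, not confirmed behaviour):

```python
import json
from datasets import load_dataset

# Write a tiny JSON file whose top level is a plain array of strings (illustrative).
with open("texts.json", "w") as f:
    json.dump(["first example", "second example"], f)

# With this feature, the JSON loader would accept the file and expose the strings
# as a single column (assumed here to be named "text").
ds = load_dataset("json", data_files="texts.json", split="train")
print(ds[0])
```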
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6695/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6695/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6980
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6980/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6980/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6980/events
|
https://github.com/huggingface/datasets/issues/6980
| 2,360,909,930
|
I_kwDODunzps6MuKBq
| 6,980
|
Support NumPy 2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4",
"events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}",
"followers_url": "https://api.github.com/users/NeilGirdhar/followers",
"following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}",
"gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NeilGirdhar",
"id": 730137,
"login": "NeilGirdhar",
"node_id": "MDQ6VXNlcjczMDEzNw==",
"organizations_url": "https://api.github.com/users/NeilGirdhar/orgs",
"received_events_url": "https://api.github.com/users/NeilGirdhar/received_events",
"repos_url": "https://api.github.com/users/NeilGirdhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NeilGirdhar",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[] | 2024-06-18T23:30:22Z
| 2024-07-12T12:04:54Z
| 2024-07-12T12:04:53Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Support NumPy 2.0.
### Motivation
NumPy introduces the Array API, which bridges the gap between machine-learning libraries. Many users of Hugging Face libraries are eager to start using the Array API.
Besides that, NumPy 2 provides a cleaner interface than NumPy 1.
### Tasks
NumPy 2.0 has been available for compatibility testing [since mid-March](https://github.com/numpy/numpy/issues/24300#issuecomment-1986815755) so that libraries can prepare ahead of the final release. What needs to be done for Hugging Face to support NumPy 2? (A quick local smoke test is sketched after the task list below.)
- [x] Fix use of `array`: https://github.com/huggingface/datasets/pull/6976
- [ ] Remove [NumPy version limit](https://github.com/huggingface/datasets/pull/6975): https://github.com/huggingface/datasets/pull/6991
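As a rough local check, a minimal smoke test could look like the sketch below; it assumes an environment with `numpy>=2.0` and a `datasets` install that already includes the fixes above.

```python
import numpy as np
from datasets import Dataset

assert np.__version__.startswith("2."), "install numpy>=2.0 to run this check"

# Round-trip a small dataset through the NumPy formatter; under NumPy 2 this
# should return plain ndarrays without errors or deprecation warnings.
ds = Dataset.from_dict({"x": [[1, 2], [3, 4]]}).with_format("numpy")
print(type(ds[0]["x"]), ds[0]["x"])
```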
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6980/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6980/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4613
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4613/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4613/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4613/events
|
https://github.com/huggingface/datasets/pull/4613
| 1,291,181,193
|
PR_kwDODunzps46skd6
| 4,613
|
Align/fix license metadata info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you thank you! Let's merge and pray? 😱 ",
"I just need to add `license_details` to the validator and yup we can merge"
] | 2022-07-01T09:50:50Z
| 2022-07-01T12:53:57Z
| 2022-07-01T12:42:47Z
|
MEMBER
| null | null | null |
fix bad "other-*" licenses and add the corresponding "license_details" when relevant
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4613/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4613/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4613.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4613",
"merged_at": "2022-07-01T12:42:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4613.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4613"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4577/events
|
https://github.com/huggingface/datasets/pull/4577
| 1,285,703,775
|
PR_kwDODunzps46aTWL
| 4,577
|
Add authentication tip to `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-27T12:05:34Z
| 2022-07-04T13:13:15Z
| 2022-07-04T13:01:30Z
|
COLLABORATOR
| null | null | null |
Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`.
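For context, the tip boils down to passing a Hub token when loading gated or private datasets, along the lines of the sketch below; the repository id is hypothetical, and the parameter name depends on the `datasets` version (newer releases use `token`, older ones `use_auth_token`).

```python
from datasets import load_dataset

# Loading a private or gated dataset requires authenticating with the Hub,
# either via `huggingface-cli login` or by passing a token explicitly.
ds = load_dataset(
    "my-org/my-private-dataset",  # hypothetical repository id
    split="train",
    token="hf_xxx",               # or use_auth_token="hf_xxx" on older versions
)
```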
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4577/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4577/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4577.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4577",
"merged_at": "2022-07-04T13:01:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4577.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4577"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6451
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6451/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6451/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6451/events
|
https://github.com/huggingface/datasets/issues/6451
| 2,010,693,912
|
I_kwDODunzps532MEY
| 6,451
|
Unable to read "marsyas/gtzan" data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gerald-wrona",
"id": 32300890,
"login": "gerald-wrona",
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gerald-wrona",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! We've merged a [PR](https://huggingface.co/datasets/marsyas/gtzan/discussions/1) that fixes the script's path logic on Windows.",
"I have transferred the discussion to the corresponding dataset: https://huggingface.co/datasets/marsyas/gtzan/discussions/2\r\n\r\nLet's continue there.",
"@mariosasko @albertvillanova \r\n\r\nThank you both very much for the speedy resolution :)"
] | 2023-11-25T15:13:17Z
| 2023-12-01T12:53:46Z
| 2023-11-27T09:36:25Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Hi, this is my code and the error:
```python
from datasets import load_dataset
gtzan = load_dataset("marsyas/gtzan", "all")
```
[error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt)
[audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt)
Python 3.11.5
Jupyter Notebook 6.5.4
Windows 10
I'm able to download and work with other datasets, but not this one. For example, both of these work fine:
```python
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True)
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Thanks for your help
https://huggingface.co/datasets/marsyas/gtzan/tree/main
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6451/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6451/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4716
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4716/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4716/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4716/events
|
https://github.com/huggingface/datasets/pull/4716
| 1,309,455,838
|
PR_kwDODunzps47pdbh
| 4,716
|
Support "tags" yaml tag
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"IMO `DatasetMetadata` shouldn't crash with attributes that it doesn't know, btw",
"Yea this PR is mostly to have a validation that this field contains a list of strings.\r\n\r\nRegarding unknown fields, the tagging app currently returns an error if a field is unknown using the `DatasetMetadata`. We can change that though"
] | 2022-07-19T12:34:31Z
| 2022-07-20T13:44:50Z
| 2022-07-20T13:31:56Z
|
MEMBER
| null | null | null |
Added the "tags" YAML tag, so that users can specify data domain/topics keywords for dataset search
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4716/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4716/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4716.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4716",
"merged_at": "2022-07-20T13:31:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4716.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4716"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5145
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5145/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5145/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5145/events
|
https://github.com/huggingface/datasets/issues/5145
| 1,418,005,452
|
I_kwDODunzps5UhQvM
| 5,145
|
Dataset order is not deterministic with ZIP archives and `iter_files`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fxmarty",
"id": 9808326,
"login": "fxmarty",
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fxmarty",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting ! The issue doesn't come from shuffling, but from `beans` row order not being deterministic:\r\n\r\nhttps://huggingface.co/datasets/beans/blob/main/beans.py uses `dl_manager.iter_files` on ZIP archives and the file order doesn't seen to be deterministic and changes across machines",
"Thank you for noticing indeed!",
"This is still a bug, so I'd keep this one open if you don't mind ;)",
"Besides the linked PR, to make the loading process fully deterministic, I believe we should also sort the data files [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L276) and [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L485) (e.g. fsspec's `LocalFileSystem.glob` relies on `os.scandir`, which yields the contents in arbitrary order). My concern is the overhead of these sorts... Maybe we could introduce a new flag to `load_dataset` similar to TFDS' [`shuffle_files`](https://www.tensorflow.org/datasets/determinism#determinism_when_reading) or sort only if the number of data files is small?",
"We already return the result sorted at the end of `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository` if I'm not mistaken",
"@lhoestq Oh, you are right. Feel free to ignore my comment.",
"I think the corresponding PR is ready to be merged :hugs: ",
"@albertvillanova Thanks for the fix!"
] | 2022-10-21T09:00:03Z
| 2022-10-27T09:51:49Z
| 2022-10-27T09:51:10Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
For the `beans` dataset (I did not try others), the order of samples is not the same on different machines. Tested on my local laptop, a GitHub Actions machine, and an EC2 instance: the three yield different orders.
### Steps to reproduce the bug
In a clean docker container or conda environment with datasets==2.6.1, run
```python
from datasets import load_dataset
from pprint import pprint
data = load_dataset("beans", split="validation")
pprint(data["image_file_path"])
```
### Expected behavior
The order of the images is the same on all machines.
### Environment info
On the EC2 instance:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
- Numpy version: not checked
```
On my local laptop:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Numpy version: 1.23.1
```
On github actions:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
- Numpy version: 1.23.4
```
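Until the underlying file-ordering issue is fixed, one user-side stopgap (a sketch, assuming the `image_file_path` column is present in the split) is to sort on a stable key after loading:

```python
from datasets import load_dataset

data = load_dataset("beans", split="validation")
# Sorting on the file path restores a machine-independent order as a workaround.
data = data.sort("image_file_path")
print(data["image_file_path"][:3])
```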
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5145/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5145/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5933
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5933/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5933/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5933/events
|
https://github.com/huggingface/datasets/pull/5933
| 1,747,382,500
|
PR_kwDODunzps5Sfi5J
| 5,933
|
Fix `to_numpy` when None values in the sequence
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I just added the same test with dynamic shape",
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome ! I'm merging now if you don't mind :)\r\nWe should probably give you permissions to merge your own PRs when you have an approval",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009980 / 0.011353 (-0.001373) | 0.005709 / 0.011008 (-0.005300) | 0.132185 / 0.038508 (0.093677) | 0.039299 / 0.023109 (0.016190) | 0.400168 / 0.275898 (0.124270) | 0.470582 / 0.323480 (0.147102) | 0.007753 / 0.007986 (-0.000233) | 0.005196 / 0.004328 (0.000868) | 0.093698 / 0.004250 (0.089448) | 0.052631 / 0.037052 (0.015579) | 0.430347 / 0.258489 (0.171858) | 0.460162 / 0.293841 (0.166321) | 0.057511 / 0.128546 (-0.071035) | 0.013944 / 0.075646 (-0.061702) | 0.459008 / 0.419271 (0.039737) | 0.075532 / 0.043533 (0.031999) | 0.405165 / 0.255139 (0.150026) | 0.456142 / 0.283200 (0.172942) | 0.117309 / 0.141683 (-0.024374) | 1.945787 / 1.452155 (0.493633) | 2.067162 / 1.492716 (0.574446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285755 / 0.018006 (0.267749) | 0.619965 / 0.000490 (0.619476) | 0.005071 / 0.000200 (0.004871) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031112 / 0.037411 (-0.006299) | 0.128514 / 0.014526 (0.113988) | 0.137161 / 0.176557 (-0.039396) | 0.211363 / 0.737135 (-0.525772) | 0.151045 / 0.296338 (-0.145293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.609361 / 0.215209 (0.394152) | 6.124844 / 2.077655 (4.047189) | 2.440757 / 1.504120 (0.936637) | 2.034495 / 1.541195 (0.493300) | 2.047192 / 1.468490 
(0.578702) | 0.883171 / 4.584777 (-3.701606) | 5.470552 / 3.745712 (1.724840) | 4.401696 / 5.269862 (-0.868165) | 2.378674 / 4.565676 (-2.187003) | 0.108065 / 0.424275 (-0.316210) | 0.013239 / 0.007607 (0.005632) | 0.830957 / 0.226044 (0.604913) | 8.090659 / 2.268929 (5.821731) | 3.289203 / 55.444624 (-52.155422) | 2.500777 / 6.876477 (-4.375700) | 2.561440 / 2.142072 (0.419367) | 1.064893 / 4.805227 (-3.740334) | 0.220486 / 6.500664 (-6.280178) | 0.079507 / 0.075469 (0.004038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544334 / 1.841788 (-0.297454) | 17.878997 / 8.074308 (9.804689) | 18.952191 / 10.191392 (8.760799) | 0.245166 / 0.680424 (-0.435258) | 0.028022 / 0.534201 (-0.506179) | 0.517828 / 0.579283 (-0.061455) | 0.618988 / 0.434364 (0.184624) | 0.589742 / 0.540337 (0.049405) | 0.670902 / 1.386936 (-0.716034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009616 / 0.011353 (-0.001737) | 0.006098 / 0.011008 (-0.004911) | 0.100301 / 0.038508 (0.061793) | 0.037792 / 0.023109 (0.014683) | 0.484667 / 0.275898 (0.208769) | 0.519286 / 0.323480 (0.195806) | 0.007427 / 0.007986 (-0.000558) | 0.007172 / 0.004328 (0.002844) | 0.104429 / 0.004250 (0.100179) | 0.056567 / 0.037052 (0.019515) | 0.502641 / 0.258489 (0.244152) | 0.549629 / 0.293841 (0.255788) | 0.049574 / 0.128546 (-0.078972) | 0.015223 / 0.075646 (-0.060424) | 0.113947 / 0.419271 (-0.305324) | 0.064585 / 0.043533 (0.021053) | 0.512962 / 0.255139 (0.257823) | 0.507218 / 0.283200 (0.224019) | 0.122194 / 0.141683 (-0.019488) | 1.927821 / 1.452155 (0.475667) | 2.051161 / 1.492716 (0.558445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291350 / 0.018006 (0.273344) | 0.588099 / 0.000490 (0.587610) | 0.001368 / 0.000200 (0.001168) | 0.000153 / 0.000054 (0.000099) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030604 / 0.037411 (-0.006807) | 0.126810 / 0.014526 (0.112285) | 0.139309 / 0.176557 (-0.037248) | 0.208030 / 0.737135 (-0.529105) | 0.138985 / 0.296338 (-0.157353) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.681254 / 0.215209 (0.466045) | 6.753856 / 2.077655 (4.676201) | 2.780704 / 1.504120 (1.276585) | 2.475205 / 1.541195 (0.934010) | 2.486784 / 1.468490 (1.018294) | 0.879223 / 4.584777 (-3.705554) | 5.662294 / 3.745712 (1.916582) | 2.698705 / 5.269862 (-2.571156) | 1.660620 / 4.565676 (-2.905057) | 0.112218 / 0.424275 (-0.312057) | 0.014211 / 0.007607 (0.006604) | 0.796957 / 0.226044 (0.570913) | 8.180897 / 2.268929 (5.911969) | 3.540419 / 55.444624 (-51.904205) | 2.899467 / 6.876477 (-3.977010) | 2.870306 / 2.142072 (0.728233) | 1.069537 / 4.805227 (-3.735690) | 0.211281 / 6.500664 (-6.289383) | 0.078898 / 0.075469 (0.003429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.666790 / 1.841788 (-0.174998) | 18.302127 / 8.074308 (10.227819) | 21.317546 / 10.191392 (11.126153) | 0.242795 / 0.680424 (-0.437629) | 0.026754 / 0.534201 (-0.507447) | 0.493375 / 0.579283 (-0.085908) | 0.605400 / 0.434364 (0.171036) | 0.586888 / 0.540337 (0.046550) | 0.722809 / 1.386936 (-0.664127) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-08T08:38:56Z
| 2023-06-09T13:49:41Z
| 2023-06-09T13:23:48Z
|
MEMBER
| null | null | null |
Closes #5927
I've realized that the error was overlooked during testing because the tested sequence contained only one None value.
Unfortunately, that was the only case where the function worked as expected: when the sequence contained more than one None value, the function failed. Consequently, I've updated the tests to include sequences with multiple None values.
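A minimal sketch of the kind of case the updated tests cover; the exact output in the comments is an assumption (None values are expected to surface as NaN in a float array):

```python
import numpy as np
from datasets import Dataset

# A sequence containing several None values, formatted as NumPy arrays.
ds = Dataset.from_dict({"seq": [[1.0, None, None, 4.0]]}).with_format("numpy")
arr = ds[0]["seq"]
print(arr)                        # expected: array([ 1., nan, nan,  4.])
print(int(np.isnan(arr).sum()))   # expected: 2
```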
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5933/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5933/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5933.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5933",
"merged_at": "2023-06-09T13:23:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5933.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5933"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5425
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5425/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5425/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5425/events
|
https://github.com/huggingface/datasets/issues/5425
| 1,534,581,850
|
I_kwDODunzps5bd9xa
| 5,425
|
Sort on multiple keys with datasets.Dataset.sort()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/101344863?v=4",
"events_url": "https://api.github.com/users/rocco-fortuna/events{/privacy}",
"followers_url": "https://api.github.com/users/rocco-fortuna/followers",
"following_url": "https://api.github.com/users/rocco-fortuna/following{/other_user}",
"gists_url": "https://api.github.com/users/rocco-fortuna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rocco-fortuna",
"id": 101344863,
"login": "rocco-fortuna",
"node_id": "U_kgDOBgpmXw",
"organizations_url": "https://api.github.com/users/rocco-fortuna/orgs",
"received_events_url": "https://api.github.com/users/rocco-fortuna/received_events",
"repos_url": "https://api.github.com/users/rocco-fortuna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rocco-fortuna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rocco-fortuna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rocco-fortuna",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multiple keys and currently loads the data into memory; however, there is a plan to eventually implement \"memory-map\" friendly kernels for the Arrow compute ops (using the Acero execution engine). \r\n\r\nSo to address this issue, you should replace `df.sort_values` with `pyarrow.compute.sort_indices` in `Dataset.sort` and adjust the signature of this function (deprecate the `kind` parameter, etc.).\r\n\r\nPS: Feel free to ping us if you need some additional help/pointers",
"@mariosasko If I understand the code right, using `pyarrow.compute.sort_indices` would also require changes to the `select` method if it is meant to sort multiple keys. That's because `select` only accepts 1D input for `indices`, not an iterable or similar which would be required for multiple keys unless you want some looping over selects. Doesn't seem that straight-forward but I might be missing something here... ",
"@MichlF No, it doesn't require modifying select because sorting on multiple keys also returns a 1D array.\r\n\r\nIt's easier to understand with an example:\r\n```python\r\n>>> import pyarrow as pa\r\n>>> import pyarrow.compute as pc\r\n>>> table = pa.table({\r\n... \"name\": [\"John\", \"Eve\", \"Peter\", \"John\"],\r\n... \"surname\": [\"Johnson\", \"Smith\", \"Smith\", \"Doe\"],\r\n... \"age\": [20, 40, 30, 50],\r\n... })\r\n>>> indices = pc.sort_indices(table, sort_keys=[(\"name\", \"ascending\"), (\"surname\", \"ascending\")])\r\n>>> print(indices)\r\n[\r\n 1,\r\n 3,\r\n 0,\r\n 2\r\n]\r\n```\r\n\r\n",
"Thanks for clarifying.\r\nI can prepare a PR to address this issue. This would be my first PR here so I have a few maybe silly questions but:\r\n- What is the preferred input type of `sort_keys` for the sort method? A sequence with name, order tuples like pyarrow's `sort_indices` requires?\r\n- What about backwards compatability: is it supposed to also accept the old way of calling sort() or should both `column` and `kind` be deprecated?\r\n- If `sort_keys` is provided in the same format as for pyarrow's `sort_indices` - i.e. along with order for each column -, `reverse` doesn't make much sense either and should be deprecated as well I assume.",
"I think we can have the following signature:\r\n```python\r\ndef sort(\r\n self,\r\n column_names: Union[str, Sequence[str]],\r\n reverse: Union[bool, Sequence[bool]] = False,\r\n kind=\"deprecated\",\r\n null_placement: str = \"last\",\r\n keep_in_memory: bool = False,\r\n load_from_cache_file: bool = True,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n ) -> \"Dataset\":\r\n``` \r\n\r\nSo we should:\r\n* rename`column` to `column_names`. `column` is a positional argument, so it's OK to rename it (not marked as positional-only with \"/\", but still should be fine)\r\n* deprecate `kind`\r\n* keep `reverse` instead of introducing `sort_keys`, but we should allow passing a list of booleans that defines the sort order of each column from `column_names` to it (`reverse = False` would be equal to `[False] * len(column_names)` and `reverse = True` to `[True] * len(column_names)`)",
"I am pretty much done with the PR. Just one clarification: `Sequence` in `arrow_dataset.py` is a custom dataclass from `features.py` instead of the `type.hinting` class `Sequence` from Python. Do you suggest using that custom `Sequence` class somehow ? Otherwise signature currently reads instead:\r\n```Python\r\n def sort(\r\n self,\r\n column_names: Union[str, List[str]],\r\n reverse: Union[bool, List[bool]] = False,\r\n kind = \"deprecated\",\r\n null_placement: str = \"last\",\r\n keep_in_memory: bool = False,\r\n load_from_cache_file: bool = True,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n )\r\n```\r\n\r\nAlso, to maintain backwards compatibility, I added conditionals for `null_placement`, because pyarrow's `null_placement` only accepts `at_start` and `at_end`, and not `last` and `first`.\r\nIf that is all good, I think I can open the PR.",
"I meant `typing.Sequence` (`datasets.Sequence` is a feature type). \r\n\r\nRegarding `null_placement`, I think we can support both `at_start` and `at_end`, and `last` and `first` (for backward compatibility; convert internally to `at_end` and `at_start` respectively).",
"> I meant typing.Sequence (datasets.Sequence is a feature type).\r\n\r\nSorry, I actually meant `typing.Sequence` and not `type.hinting`. However, the issue is still that `dataset.Sequence` is imported in `arrow_dataset.py` so I cannot import and use `typing.Sequence` for the `sort`'s signature without overwriting the `dataset.Sequence` import. The latter is used in the `align_labels_with_mapping` method so it's a necessary import for `arrow_dataset.py`. \r\nTo import `typing.Sequence` as something else than `Sequence` to avoid overwriting may only be confusing and doesn't seem good practice!? The other solution is to keep `List` type hinting as in the signature I posted in my previous post but this excludes other Sequence types and may cause problems further down the line.\r\nPlease advise,\r\nThanks for all the clarifications!",
"You can avoid the name collision by renaming `typing.Sequence` to `Sequence_` when importing:\r\n```python\r\nfrom typing import Sequence as Sequence_\r\n```",
"Resolved via #5502 "
] | 2023-01-16T09:22:26Z
| 2023-02-24T16:15:11Z
| 2023-02-24T16:15:11Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1
`sort()` is not stable (it does not preserve the existing order of rows that compare equal), and it supports neither sorting on multiple columns nor a key function.
The suggested solution:
> ... having something similar to pandas and be able to specify multiple columns for sorting. We’re already using pandas under the hood to do the sorting in datasets.
The suggested workaround:
> convert your dataset to pandas and use `df.sort_values()`
### Motivation
A stable sort (one that preserves the existing order) is very handy when one needs to sort on multiple columns, A and B, so that e.g. whenever A is equal for two or more rows, those rows stay sorted by B.
Having a parameter to do this in 🤗datasets would be cleaner than going through pandas and back, and it wouldn't add much complexity to the library.
Alternatives:
- the possibility to specify multiple keys to sort by with decreasing priority (suggested solution),
- the ability to provide a key function for sorting, so that one can manually specify the sorting criteria.
### Your contribution
I'll be happy to contribute by submitting a PR, following the guidelines in `CONTRIBUTING.md`.
Would love to get thoughts on this, if anyone has anything to add.
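For concreteness, a minimal sketch of the two options discussed here, using toy columns: the pandas round-trip workaround, and the multi-key `Dataset.sort` call that was eventually added in #5502 (treat the exact call as illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({
    "name": ["John", "Eve", "Peter", "John"],
    "surname": ["Johnson", "Smith", "Smith", "Doe"],
    "age": [20, 40, 30, 50],
})

# Workaround: round-trip through pandas.
df = ds.to_pandas().sort_values(["name", "surname"])
sorted_via_pandas = Dataset.from_pandas(df, preserve_index=False)

# Multi-key sort as added in #5502: a list of column names,
# optionally with a per-column `reverse` flag.
sorted_via_datasets = ds.sort(["name", "surname"], reverse=[False, False])
```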
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5425/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5425/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6926/events
|
https://github.com/huggingface/datasets/pull/6926
| 2,322,164,287
|
PR_kwDODunzps5w0uII
| 6,926
|
Update process.mdx: Fix code listing in Shard section
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4",
"events_url": "https://api.github.com/users/FadyMorris/events{/privacy}",
"followers_url": "https://api.github.com/users/FadyMorris/followers",
"following_url": "https://api.github.com/users/FadyMorris/following{/other_user}",
"gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FadyMorris",
"id": 16918280,
"login": "FadyMorris",
"node_id": "MDQ6VXNlcjE2OTE4Mjgw",
"organizations_url": "https://api.github.com/users/FadyMorris/orgs",
"received_events_url": "https://api.github.com/users/FadyMorris/received_events",
"repos_url": "https://api.github.com/users/FadyMorris/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FadyMorris",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2024-05-29T01:25:55Z
| 2024-05-29T03:11:20Z
| 2024-05-29T03:11:08Z
|
CONTRIBUTOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/16918280?v=4",
"events_url": "https://api.github.com/users/FadyMorris/events{/privacy}",
"followers_url": "https://api.github.com/users/FadyMorris/followers",
"following_url": "https://api.github.com/users/FadyMorris/following{/other_user}",
"gists_url": "https://api.github.com/users/FadyMorris/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FadyMorris",
"id": 16918280,
"login": "FadyMorris",
"node_id": "MDQ6VXNlcjE2OTE4Mjgw",
"organizations_url": "https://api.github.com/users/FadyMorris/orgs",
"received_events_url": "https://api.github.com/users/FadyMorris/received_events",
"repos_url": "https://api.github.com/users/FadyMorris/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FadyMorris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FadyMorris/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FadyMorris",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6926/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6926/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6926",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6926"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4679
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4679/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4679/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4679/events
|
https://github.com/huggingface/datasets/pull/4679
| 1,303,980,648
|
PR_kwDODunzps47XX67
| 4,679
|
Added method to remove excess nesting in a DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4",
"events_url": "https://api.github.com/users/CakeCrusher/events{/privacy}",
"followers_url": "https://api.github.com/users/CakeCrusher/followers",
"following_url": "https://api.github.com/users/CakeCrusher/following{/other_user}",
"gists_url": "https://api.github.com/users/CakeCrusher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CakeCrusher",
"id": 37946988,
"login": "CakeCrusher",
"node_id": "MDQ6VXNlcjM3OTQ2OTg4",
"organizations_url": "https://api.github.com/users/CakeCrusher/orgs",
"received_events_url": "https://api.github.com/users/CakeCrusher/received_events",
"repos_url": "https://api.github.com/users/CakeCrusher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CakeCrusher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CakeCrusher",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! I think the issue you linked is closed and suggests to use `remove_columns`.\r\n\r\nMoreover if you end up with a dataset with an unnecessarily nested data, please modify your processing functions to not output nested data, or use `map(..., batched=True)` if you function take batches as input",
"Hi @lhoestq , you are right about the issues this pull has steered beyond that issue. I created this [colab notebook](https://colab.research.google.com/drive/16aLu6QrDSV_aUYRdpufl5E4iS08qkUGj?usp=sharing) to present the error. I tried using batch and that won't resolve it either. I'm looking into that error right now.",
"I think you just need to pass one example at a time to your tokenizer, this way you don't end up with nested data:\r\n```python\r\n\r\ndef preprocessFunction(row):\r\n collatedContext = tokenizer.eos_token.join([row[\"context\"+str(i+1)] for i in range(int(AMT_OF_CONTEXT))])\r\n response = row[\"response\"]\r\n tokenizedContext = tokenizer(\r\n collatedContext, max_length=max_context_length, truncation=True # don't pass as a list here\r\n )\r\n with tokenizer.as_target_tokenizer():\r\n tokenized_response = tokenizer(\r\n response, max_length=max_response_length, truncation=True # don't pass a a list here\r\n )\r\n tokenizedContext[\"labels\"] = tokenized_response[\"input_ids\"]\r\n return tokenizedContext\r\n```",
"Yes that is correct, the purpose of this pull is to advise of a more general solution like with `def remove_excess_nesting(self)` or maybe automate the solution (stas00 advised not to automate it as it could \"not be backwards compatible\").",
"I'm not sure I understand how having `remove_excess_nesting` would make more sense than just fixing the preprocessFunction to simply not return nested samples, can you elaborate ?",
"Figuring out the issue can be a bit difficult to figure out. Only until I added batch does it make a little more sense with the error\r\n\r\n> sequence item 0: expected str instance, list found\r\n\r\nbut batch was never intended.\r\n\r\nWhen you run the colab you will notice that only until collating do you learn there is this error. So i figured it would be better to address it during at the `DatasetDict` level.\r\nI think it would be ideal if the user could be notified at the preprocess function.",
"I'm not arguing that `remove_excess_nesting` is the right solution but what I aim to address is dealing with unnecessary nesting as early as possible.",
"> When you run the colab you will notice that only until collating do you learn there is this error.\r\n\r\nI think users can just check the `dataset.features` and they would notice that the data are nested\r\n```python\r\n{\r\n 'input_ids': Sequence(Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), length=-1, id=None)\r\n ...\r\n}\r\n```\r\n\r\nSometime nested data are intentional, so you can't know in advance if it's a user's mistake or something planned.",
"Yes, I understand, it could be intentional and only the collator has problems with it. So, it is not worth handling it any differently in any other non-erroneous data. \r\n\r\nThat being said do you think there is any use for the `remove_excess_nesting` method? Or maybe it should be applied in a different way? If not feel free to close this PR. ",
"I think users can write it and use `map` themselves if needed, it is pretty straightforward to implement.\r\n\r\nI'm closing this PR if you don't mind, and thank you for the discussion :)",
"No problem @lhoestq , thanks for walking me through it."
] | 2022-07-13T21:49:37Z
| 2022-07-21T15:55:26Z
| 2022-07-21T10:55:02Z
|
NONE
| null | null | null |
Added the ability for a DatasetDict to remove additional nested layers within its features to avoid conflicts when collating. It is meant to accompany [this PR](https://github.com/huggingface/transformers/pull/18119) to resolve the same issue [#15505](https://github.com/huggingface/transformers/issues/15505).
@stas00 @lhoestq
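As suggested in the discussion above, users can also strip one level of excess nesting themselves with `map`; a minimal sketch (the unwrapping rule is a placeholder and depends on the actual features):
```python
def remove_excess_nesting(example):
    # Unwrap single-element nested lists such as [[0, 1, 2]] -> [0, 1, 2],
    # which is what a tokenizer returns when it is fed a list of length 1.
    return {
        key: value[0]
        if isinstance(value, list) and len(value) == 1 and isinstance(value[0], list)
        else value
        for key, value in example.items()
    }

# Applied to every split of a DatasetDict via map, e.g.:
# dataset_dict = dataset_dict.map(remove_excess_nesting)
```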
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4679/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4679/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4679.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4679",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4679.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4679"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7149
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7149/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7149/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7149/events
|
https://github.com/huggingface/datasets/issues/7149
| 2,524,497,448
|
I_kwDODunzps6WeMYo
| 7,149
|
Datasets Unknown Keyword Argument Error - task_templates
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varungupta31",
"id": 51288316,
"login": "varungupta31",
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varungupta31",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n",
"Hello @albertvillanova \r\n\r\nI got the same error while loading this dataset: https://huggingface.co/datasets/alaleye/aloresb...\r\n\r\nHow can I fix it ? \r\nThanks",
"I am getting the same error on the below code, any fix to this ?\n\n```\nfrom datasets import load_dataset\n\nminds = load_dataset(\"PolyAI/minds14\", name=\"en-AU\", split=\"train\")\nminds\n```"
] | 2024-09-13T10:30:57Z
| 2025-03-06T07:11:55Z
| 2024-09-13T14:10:48Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Issue
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
Gives error
```
TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates'
```
A simple downgrade to a lower version, `datasets==2.21.0`, solves it.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
### Expected behavior
Should load the dataset correctly.
### Environment info
- Datasets version `3.0.0`
- `transformers` version: 4.45.0.dev0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.12.4
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.35.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
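Until the dataset card was fixed on the Hub, the two workarounds mentioned in this thread looked roughly like the sketch below (the token value is a placeholder; note that recent `datasets` releases take `token=` rather than the older `use_auth_token=`):
```python
# Workaround 1: pin a release that still tolerates the deprecated metadata
#   pip install "datasets<3.0.0"

# Workaround 2: once the dataset card no longer declares `task_templates`,
# load the dataset normally with the current API.
from datasets import load_dataset

examples = load_dataset("facebook/winoground", token="hf_xxx")  # placeholder token
```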
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7149/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7149/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4830
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4830/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4830/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4830/events
|
https://github.com/huggingface/datasets/pull/4830
| 1,336,177,937
|
PR_kwDODunzps49Cdro
| 4,830
|
Fix task tags in dataset cards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 2022-08-11T16:06:06Z
| 2022-08-11T16:37:27Z
| 2022-08-11T16:23:00Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4830/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4830/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4830",
"merged_at": "2022-08-11T16:23:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4830"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6435
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6435/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6435/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6435/events
|
https://github.com/huggingface/datasets/issues/6435
| 2,000,690,513
|
I_kwDODunzps53QB1R
| 6,435
|
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"[This doc section](https://huggingface.co/docs/datasets/main/en/process#multiprocessing) explains how to modify the script to avoid this error.",
"@mariosasko thank you very much, i'll check it",
"@mariosasko no it does not\r\n\r\n`Dataset.filter() got an unexpected keyword argument 'with_rank'`"
] | 2023-11-19T04:21:16Z
| 2024-01-27T17:14:20Z
| 2023-12-04T16:57:43Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
1. I ran the dataset mapping step with `num_proc=6` in the training script and got this error:
`RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`
I can't actually find a way to run multi-GPU dataset mapping. Can you help?
### Steps to reproduce the bug
1. Run SDXL training with `num_proc=6`: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py
### Expected behavior
Should work well
### Environment info
6x A100 SXM, Linux
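For reference, a minimal sketch of the pattern from the linked docs section (the dataset name and the body of the map function are placeholders): spawn worker processes instead of forking them, and use `with_rank=True` so each worker can pick its own GPU:
```python
from multiprocess import set_start_method

import torch
from datasets import load_dataset


def gpu_map(batch, rank):
    # Each worker pins itself to one GPU based on its rank.
    device = f"cuda:{rank % torch.cuda.device_count()}"
    # ... run the model on `device` and add the resulting columns to `batch` ...
    return batch


if __name__ == "__main__":
    set_start_method("spawn")  # avoids re-initializing CUDA in a forked subprocess
    ds = load_dataset("some/dataset", split="train")  # placeholder dataset
    ds = ds.map(gpu_map, batched=True, with_rank=True, num_proc=torch.cuda.device_count())
```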
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6435/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6435/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5579
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5579/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5579/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5579/events
|
https://github.com/huggingface/datasets/pull/5579
| 1,599,732,211
|
PR_kwDODunzps5Kwgo4
| 5,579
|
Add instructions to create `DataLoader` from augmented dataset in object detection guide
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Laurent2916",
"id": 21087104,
"login": "Laurent2916",
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Laurent2916",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5579). All of your documentation changes will be reflected on that endpoint.",
"I'm not sure we need this part as we provide a link to the notebook that shows how to train an object detection model, and this notebook instantiates a `DataLoader` before training the model. I'd like to hear what @stevhliu thinks.\r\n\r\nPS: Your `collate_fn` calls `torch.stack` on the `bbox` tensors, which don't have the same shape, so this will fail.",
"I agree with @mariosasko; we also have a [Use with PyTorch](https://huggingface.co/docs/datasets/use_with_pytorch) guide that shows how you can create a `DataLoader`. "
] | 2023-02-25T14:53:17Z
| 2023-03-23T19:24:59Z
| 2023-03-23T19:24:50Z
|
CONTRIBUTOR
| null | null | null |
The following adds instructions on how to create a `DataLoader` to the guide on using object detection with augmentations (#4710). I am open to any suggestions for improvement!
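For illustration, a minimal sketch of a collate function that avoids the `torch.stack` problem mentioned in the review (the field names `pixel_values`, `bbox`, and `category` are placeholders and depend on how the augmented dataset is formatted):
```python
import torch
from torch.utils.data import DataLoader


def collate_fn(examples):
    # Stack the fixed-shape image tensors, but keep the variable-length
    # box/label annotations as plain lists instead of stacking ragged tensors.
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    labels = [
        {"boxes": example["bbox"], "labels": example["category"]}
        for example in examples
    ]
    return {"pixel_values": pixel_values, "labels": labels}


# dataloader = DataLoader(augmented_dataset.with_format("torch"), batch_size=4, collate_fn=collate_fn)
```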
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Laurent2916",
"id": 21087104,
"login": "Laurent2916",
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Laurent2916",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5579/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5579/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5579",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5579"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7225
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7225/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7225/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7225/events
|
https://github.com/huggingface/datasets/issues/7225
| 2,586,229,216
|
I_kwDODunzps6aJrng
| 7,225
|
Huggingface GIT returns null as Content-Type instead of application/x-git-receive-pack-result
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-10-14T14:33:06Z
| 2024-10-14T14:33:06Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
We push changes to our datasets programmatically. Our git client, jGit, reports that the Hugging Face git server returns null as the Content-Type after a push.
### Steps to reproduce the bug
A basic Kotlin application:
```kotlin
val person = PersonIdent(
"padmalcom",
"padmalcom@sth.com"
)
val cp = UsernamePasswordCredentialsProvider(
"padmalcom",
"mysecrettoken"
)
val git =
KGit.cloneRepository {
setURI("https://huggingface.co/datasets/sth/images")
setTimeout(60)
setProgressMonitor(TextProgressMonitor())
setCredentialsProvider(cp)
}
FileOutputStream("./images/images.csv").apply { writeCsv(images) }
git.add {
addFilepattern("images.csv")
}
for (i in images) {
FileUtils.copyFile(
File("./files/${i.id}"),
File("./images/${i.id + File(i.fileName).extension }")
)
git.add {
addFilepattern("${i.id + File(i.fileName).extension }")
}
}
val revCommit = git.commit {
author = person
message = "Uploading images at " + LocalDateTime.now()
.format(DateTimeFormatter.ISO_DATE_TIME)
setCredentialsProvider(cp)
}
val push = git.push {
setCredentialsProvider(cp)
}
```
### Expected behavior
The git server is expected to return the Content-Type _application/x-git-receive-pack-result_.
### Environment info
This is independent of the `datasets` library itself.
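Not a fix for the missing header, but as a point of comparison, a minimal Python sketch (repo id, folder, and token are placeholders taken from the snippet above) of pushing the same files over the Hub HTTP API with `huggingface_hub`, which avoids the git transport entirely:
```python
from huggingface_hub import HfApi

api = HfApi(token="mysecrettoken")  # placeholder token
api.upload_folder(
    folder_path="./images",          # the locally prepared folder from the snippet
    repo_id="sth/images",            # placeholder repo id
    repo_type="dataset",
    commit_message="Uploading images",
)
```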
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7225/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7225/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6041
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6041/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6041/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6041/events
|
https://github.com/huggingface/datasets/pull/6041
| 1,807,441,055
|
PR_kwDODunzps5Vp0GX
| 6,041
|
Flatten repository_structure docs on yaml
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6041). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007587 / 0.011353 (-0.003766) | 0.004469 / 0.011008 (-0.006540) | 0.098028 / 0.038508 (0.059520) | 0.086378 / 0.023109 (0.063269) | 0.412290 / 0.275898 (0.136392) | 0.449912 / 0.323480 (0.126432) | 0.004769 / 0.007986 (-0.003217) | 0.003708 / 0.004328 (-0.000621) | 0.075541 / 0.004250 (0.071290) | 0.063821 / 0.037052 (0.026768) | 0.417213 / 0.258489 (0.158724) | 0.471954 / 0.293841 (0.178113) | 0.036243 / 0.128546 (-0.092303) | 0.009540 / 0.075646 (-0.066106) | 0.339043 / 0.419271 (-0.080228) | 0.061853 / 0.043533 (0.018320) | 0.418510 / 0.255139 (0.163371) | 0.462372 / 0.283200 (0.179173) | 0.027328 / 0.141683 (-0.114355) | 1.745114 / 1.452155 (0.292959) | 1.879839 / 1.492716 (0.387123) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211042 / 0.018006 (0.193035) | 0.512865 / 0.000490 (0.512375) | 0.008744 / 0.000200 (0.008544) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032493 / 0.037411 (-0.004918) | 0.096472 / 0.014526 (0.081946) | 0.110340 / 0.176557 (-0.066216) | 0.183195 / 0.737135 (-0.553940) | 0.112829 / 0.296338 (-0.183510) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478040 / 0.215209 (0.262830) | 4.743776 / 2.077655 (2.666121) | 2.389770 / 1.504120 (0.885650) | 2.168468 / 1.541195 (0.627274) | 2.238154 / 1.468490 
(0.769663) | 0.572308 / 4.584777 (-4.012469) | 4.154783 / 3.745712 (0.409071) | 3.771509 / 5.269862 (-1.498353) | 2.384828 / 4.565676 (-2.180848) | 0.068122 / 0.424275 (-0.356153) | 0.008573 / 0.007607 (0.000965) | 0.560300 / 0.226044 (0.334256) | 5.591163 / 2.268929 (3.322235) | 2.929660 / 55.444624 (-52.514965) | 2.517721 / 6.876477 (-4.358756) | 2.762285 / 2.142072 (0.620213) | 0.687193 / 4.805227 (-4.118034) | 0.157839 / 6.500664 (-6.342825) | 0.071862 / 0.075469 (-0.003607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.484788 / 1.841788 (-0.357000) | 21.696071 / 8.074308 (13.621763) | 15.476166 / 10.191392 (5.284774) | 0.185034 / 0.680424 (-0.495390) | 0.021181 / 0.534201 (-0.513020) | 0.463324 / 0.579283 (-0.115959) | 0.502455 / 0.434364 (0.068091) | 0.559880 / 0.540337 (0.019543) | 0.767281 / 1.386936 (-0.619655) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007572 / 0.011353 (-0.003781) | 0.004331 / 0.011008 (-0.006677) | 0.075023 / 0.038508 (0.036515) | 0.085474 / 0.023109 (0.062365) | 0.464900 / 0.275898 (0.189002) | 0.503348 / 0.323480 (0.179868) | 0.006885 / 0.007986 (-0.001101) | 0.003647 / 0.004328 (-0.000681) | 0.074874 / 0.004250 (0.070623) | 0.071076 / 0.037052 (0.034024) | 0.465495 / 0.258489 (0.207006) | 0.506418 / 0.293841 (0.212577) | 0.038900 / 0.128546 (-0.089647) | 0.009467 / 0.075646 (-0.066180) | 0.082547 / 0.419271 (-0.336724) | 0.058457 / 0.043533 (0.014924) | 0.459114 / 0.255139 (0.203975) | 0.484872 / 0.283200 (0.201673) | 0.027443 / 0.141683 (-0.114240) | 1.713996 / 1.452155 (0.261841) | 1.893639 / 1.492716 (0.400922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248693 / 0.018006 (0.230687) | 0.488805 / 0.000490 (0.488315) | 0.000421 / 0.000200 (0.000221) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034886 / 0.037411 (-0.002525) | 0.103215 / 0.014526 (0.088689) | 0.116422 / 0.176557 (-0.060134) | 0.182789 / 0.737135 (-0.554346) | 0.117788 / 0.296338 (-0.178550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.482782 / 0.215209 (0.267573) | 4.802895 / 2.077655 (2.725241) | 2.489823 / 1.504120 (0.985703) | 2.324005 / 1.541195 (0.782810) | 2.457674 / 1.468490 (0.989184) | 0.566980 / 4.584777 (-4.017797) | 4.117359 / 3.745712 (0.371647) | 3.841180 / 5.269862 (-1.428681) | 2.322410 / 4.565676 (-2.243266) | 0.066367 / 0.424275 (-0.357908) | 0.008501 / 0.007607 (0.000894) | 0.561453 / 0.226044 (0.335408) | 5.694861 / 2.268929 (3.425932) | 3.129829 / 55.444624 (-52.314796) | 2.647375 / 6.876477 (-4.229102) | 2.673071 / 2.142072 (0.530998) | 0.676120 / 4.805227 (-4.129108) | 0.153483 / 6.500664 (-6.347181) | 0.070797 / 0.075469 (-0.004672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.575697 / 1.841788 (-0.266091) | 22.447462 / 8.074308 (14.373154) | 15.964906 / 10.191392 (5.773514) | 0.218343 / 0.680424 (-0.462081) | 0.021051 / 0.534201 (-0.513150) | 0.466079 / 0.579283 (-0.113204) | 0.493190 / 0.434364 (0.058826) | 0.565929 / 0.540337 (0.025592) | 0.768638 / 1.386936 (-0.618298) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006268 / 0.011353 (-0.005085) | 0.003715 / 0.011008 (-0.007293) | 0.080628 / 0.038508 (0.042120) | 0.070294 / 0.023109 (0.047185) | 0.404749 / 0.275898 (0.128851) | 0.434130 / 0.323480 (0.110650) | 0.005533 / 0.007986 (-0.002452) | 0.002980 / 0.004328 (-0.001349) | 0.063016 / 0.004250 (0.058766) | 0.051667 / 0.037052 (0.014615) | 0.403859 / 0.258489 (0.145370) | 0.437913 / 0.293841 (0.144073) | 0.027518 / 0.128546 (-0.101029) | 0.007991 / 0.075646 (-0.067655) | 0.260723 / 0.419271 (-0.158548) | 0.046580 / 0.043533 (0.003047) | 0.405453 / 0.255139 (0.150314) | 0.428390 / 0.283200 (0.145190) | 0.022774 / 0.141683 (-0.118909) | 1.488204 / 1.452155 (0.036049) | 1.536557 / 1.492716 (0.043841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185864 / 0.018006 (0.167858) | 0.431388 / 0.000490 (0.430898) | 0.003743 / 0.000200 (0.003543) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024062 / 0.037411 (-0.013350) | 0.075749 / 0.014526 (0.061224) | 0.083519 / 0.176557 (-0.093037) | 0.147965 / 0.737135 (-0.589170) | 0.085635 / 0.296338 (-0.210703) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400455 / 0.215209 (0.185246) | 4.084294 / 2.077655 (2.006640) | 1.928795 / 1.504120 (0.424675) | 1.743205 / 1.541195 (0.202010) | 1.811233 / 1.468490 
(0.342743) | 0.504976 / 4.584777 (-4.079801) | 3.073134 / 3.745712 (-0.672578) | 2.816357 / 5.269862 (-2.453505) | 1.857462 / 4.565676 (-2.708214) | 0.058329 / 0.424275 (-0.365946) | 0.006850 / 0.007607 (-0.000757) | 0.466017 / 0.226044 (0.239973) | 4.660158 / 2.268929 (2.391230) | 2.396614 / 55.444624 (-53.048010) | 2.007491 / 6.876477 (-4.868986) | 2.206997 / 2.142072 (0.064925) | 0.592233 / 4.805227 (-4.212994) | 0.125364 / 6.500664 (-6.375300) | 0.061166 / 0.075469 (-0.014303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290148 / 1.841788 (-0.551640) | 18.317462 / 8.074308 (10.243154) | 13.465142 / 10.191392 (3.273750) | 0.149696 / 0.680424 (-0.530728) | 0.017120 / 0.534201 (-0.517081) | 0.334818 / 0.579283 (-0.244465) | 0.363976 / 0.434364 (-0.070388) | 0.388271 / 0.540337 (-0.152066) | 0.542383 / 1.386936 (-0.844553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006029 / 0.011353 (-0.005324) | 0.003656 / 0.011008 (-0.007352) | 0.063518 / 0.038508 (0.025010) | 0.058214 / 0.023109 (0.035105) | 0.435987 / 0.275898 (0.160089) | 0.442769 / 0.323480 (0.119289) | 0.004675 / 0.007986 (-0.003310) | 0.002911 / 0.004328 (-0.001418) | 0.063020 / 0.004250 (0.058769) | 0.049422 / 0.037052 (0.012369) | 0.435521 / 0.258489 (0.177032) | 0.478251 / 0.293841 (0.184411) | 0.027294 / 0.128546 (-0.101252) | 0.008073 / 0.075646 (-0.067574) | 0.068397 / 0.419271 (-0.350875) | 0.044796 / 0.043533 (0.001263) | 0.416646 / 0.255139 (0.161507) | 0.435021 / 0.283200 (0.151821) | 0.024686 / 0.141683 (-0.116997) | 1.495650 / 1.452155 (0.043496) | 1.495846 / 1.492716 (0.003130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211205 / 0.018006 (0.193199) | 0.414497 / 0.000490 (0.414007) | 0.001704 / 0.000200 (0.001504) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025237 / 0.037411 (-0.012174) | 0.077291 / 0.014526 (0.062765) | 0.085736 / 0.176557 (-0.090821) | 0.141059 / 0.737135 (-0.596076) | 0.087620 / 0.296338 (-0.208719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421995 / 0.215209 (0.206786) | 4.158503 / 2.077655 (2.080849) | 2.313598 / 1.504120 (0.809479) | 2.183553 / 1.541195 (0.642359) | 2.279656 / 1.468490 (0.811166) | 0.500146 / 4.584777 (-4.084631) | 3.092654 / 3.745712 (-0.653059) | 4.371616 / 5.269862 (-0.898245) | 2.605096 / 4.565676 (-1.960581) | 0.057658 / 0.424275 (-0.366617) | 0.006574 / 0.007607 (-0.001033) | 0.491455 / 0.226044 (0.265411) | 4.926730 / 2.268929 (2.657801) | 2.635749 / 55.444624 (-52.808875) | 2.255780 / 6.876477 (-4.620697) | 2.305547 / 2.142072 (0.163474) | 0.589027 / 4.805227 (-4.216200) | 0.126229 / 6.500664 (-6.374435) | 0.063268 / 0.075469 (-0.012201) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299102 / 1.841788 (-0.542686) | 18.547417 / 8.074308 (10.473109) | 13.860030 / 10.191392 (3.668638) | 0.145482 / 0.680424 (-0.534942) | 0.016543 / 0.534201 (-0.517658) | 0.330788 / 0.579283 (-0.248496) | 0.362020 / 0.434364 (-0.072344) | 0.380635 / 0.540337 (-0.159703) | 0.517375 / 1.386936 (-0.869561) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-17T10:15:10Z
| 2023-07-17T10:24:51Z
| 2023-07-17T10:16:22Z
|
MEMBER
| null | null | null |
To have Splits, Configurations and Builder parameters at the same doc level
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6041/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6041/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6041.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6041",
"merged_at": "2023-07-17T10:16:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6041.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6041"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6647
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6647/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6647/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6647/events
|
https://github.com/huggingface/datasets/pull/6647
| 2,123,397,569
|
PR_kwDODunzps5mSB2B
| 6,647
|
Update loading.mdx to include "jsonl" file loading.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4",
"events_url": "https://api.github.com/users/mosheber/events{/privacy}",
"followers_url": "https://api.github.com/users/mosheber/followers",
"following_url": "https://api.github.com/users/mosheber/following{/other_user}",
"gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mosheber",
"id": 22236370,
"login": "mosheber",
"node_id": "MDQ6VXNlcjIyMjM2Mzcw",
"organizations_url": "https://api.github.com/users/mosheber/orgs",
"received_events_url": "https://api.github.com/users/mosheber/received_events",
"repos_url": "https://api.github.com/users/mosheber/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosheber/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mosheber",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6647). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks for adding the explicit loading command.\r\n> \r\n> However, I would move it just below, where we present the JSON-Lines example.\r\n> \r\n> * Maybe adding that this format is called JSON-Lines\r\n> * Add the example after the JSON-Lines data example\r\n> \r\n> https://github.com/huggingface/datasets/blob/14d9afbb7ae1b787c450261ca0ff374551993031/docs/source/loading.mdx#L135-L138\r\n\r\nThank you @albertvillanova for the feedback! I moved the jsonl file loading example to a more appropriate location. "
] | 2024-02-07T16:18:08Z
| 2024-02-08T15:34:17Z
| null |
NONE
| null | null | null |
* A small update to the documentation, noting the ability to load jsonl files.
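For reference, a minimal sketch of the loading pattern this doc change describes (the file name is a hypothetical placeholder, not part of the PR):

```python
from datasets import load_dataset

# JSON-Lines files are handled by the "json" builder; each line holds one example.
dataset = load_dataset("json", data_files="my_file.jsonl")
print(dataset["train"][0])
```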
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6647/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6647/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6647.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6647",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6647.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6647"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6562
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6562/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6562/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6562/events
|
https://github.com/huggingface/datasets/issues/6562
| 2,067,904,504
|
I_kwDODunzps57Qbf4
| 6,562
|
datasets.DownloadMode.FORCE_REDOWNLOAD uses the cache to download dataset features with the load_dataset function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73234162?v=4",
"events_url": "https://api.github.com/users/LsTam91/events{/privacy}",
"followers_url": "https://api.github.com/users/LsTam91/followers",
"following_url": "https://api.github.com/users/LsTam91/following{/other_user}",
"gists_url": "https://api.github.com/users/LsTam91/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LsTam91",
"id": 73234162,
"login": "LsTam91",
"node_id": "MDQ6VXNlcjczMjM0MTYy",
"organizations_url": "https://api.github.com/users/LsTam91/orgs",
"received_events_url": "https://api.github.com/users/LsTam91/received_events",
"repos_url": "https://api.github.com/users/LsTam91/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LsTam91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LsTam91/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LsTam91",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-01-05T19:10:25Z
| 2024-01-05T19:10:25Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I have updated my dataset by adding a new feature and pushed it to the Hub. When I try to download it on a machine that contains the old version, using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get the error pasted below.
It seems that the `load_dataset` function still uses the old features schema instead of downloading everything anew from the Hub.
I found a way to work around this issue by manually deleting the old dataset cache, but from my understanding of the `datasets.DownloadMode.FORCE_REDOWNLOAD` option, the dataset cache should be ignored. A sketch of that manual workaround is shown below.
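A minimal sketch of that manual workaround, assuming the default cache location and a placeholder dataset folder name (the exact folder name on disk may differ):

```python
import shutil
from pathlib import Path

# Assumption: default cache location; the cached folder name is a placeholder
# and may not match the dataset name used on the Hub exactly.
cache_dir = Path.home() / ".cache" / "huggingface" / "datasets" / "your_dataset_name"
if cache_dir.exists():
    shutil.rmtree(cache_dir)  # the next load_dataset call rebuilds from the Hub
```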
### Steps to reproduce the bug
1. Download your dataset on your machine using `datasets.load_dataset`
2. Add a new feature to your dataset and push it to the Hub
3. On the same machine, re-download your dataset using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` (a minimal repro sketch follows the list)
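Putting the steps together, a minimal repro sketch (the dataset name is a placeholder):

```python
import datasets

# 1. First load populates the local cache with the old schema.
ds = datasets.load_dataset("your_dataset_name")

# 2. (Done elsewhere) add a new feature and push the updated dataset to the Hub.

# 3. Re-download; FORCE_REDOWNLOAD is expected to ignore the cached schema.
ds = datasets.load_dataset(
    "your_dataset_name",
    download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
)
```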
### Expected behavior
`
ValueError: Couldn't cast
id: string
level: string
context: list<element: string>
child 0, element: string
type: string
answer: string
question: string
supporting_facts: list<element: string>
child 0, element: string
fra_answer: string
fra_question: string
-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 490
to
{'id': Value(dtype='string', id=None), 'level': Value(dtype='string', id=None), 'context': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'type': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'supporting_facts': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError
...
DatasetGenerationError: An error occurred while generating the dataset`
### Environment info
datasets-2.16.1 huggingface-hub-0.20.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6562/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6562/timeline
| null | null | null | null |