| Column | Type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | dict |
| comments | list |
| created_at | timestamp[ns, tz=UTC] |
| updated_at | timestamp[ns, tz=UTC] |
| closed_at | timestamp[ns, tz=UTC] |
| author_association | string |
| type | float64 |
| active_lock_reason | float64 |
| sub_issues_summary | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | float64 |
| state_reason | string |
| draft | float64 |
| pull_request | dict |
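As a rough illustration of how this schema maps to Arrow types, the scalar columns could be declared in PyArrow as below; this is a sketch only, and the nested `dict`/`list` columns (user, labels, milestone, reactions, ...) are omitted.

```python
import pyarrow as pa

# Sketch: the scalar columns of the issues table above as a PyArrow schema.
# Nested columns (user, labels, milestone, reactions, ...) are left out.
schema = pa.schema([
    ("url", pa.string()),
    ("id", pa.int64()),
    ("number", pa.int64()),
    ("title", pa.string()),
    ("state", pa.string()),
    ("locked", pa.bool_()),
    ("created_at", pa.timestamp("ns", tz="UTC")),
    ("updated_at", pa.timestamp("ns", tz="UTC")),
    ("closed_at", pa.timestamp("ns", tz="UTC")),
    ("author_association", pa.string()),
])
```
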
url: https://api.github.com/repos/huggingface/datasets/issues/4619
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4619/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4619/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4619/events
html_url: https://github.com/huggingface/datasets/issues/4619
id: 1292107275
node_id: I_kwDODunzps5NA_4L
number: 4619
title: np arrays get turned into native lists
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZhaofengWu",
"id": 11954789,
"login": "ZhaofengWu",
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZhaofengWu",
"user_view_type": "public"
}
labels:
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']` it should return `np.ndarray`.\r\nI believe internally it will not store it as a list, it is only returning a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nIn [3]: dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\nIn [4]: dataset2[0][\"tmp\"]\r\nOut[4]: [0.5]\r\n\r\nIn [5]: dataset2.set_format('np')\r\n\r\nIn [6]: dataset2[0][\"tmp\"]\r\nOut[6]: array([0.5])\r\n```",
"I see, thanks! Any idea if the default numpy → list conversion might cause precision loss?",
"I'm not super familiar with our datasets works internally, but I think your `np` array will be stored in a `pyarrow` format, and then you take a view of this as a python array. In which case, I think the precision should be preserved."
]
created_at: 2022-07-02T17:54:57Z
updated_at: 2022-07-03T20:27:07Z
closed_at: null
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary:
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
body:
## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datasets.load_dataset("glue", "mrpc")["validation"]
Reusing dataset glue (...)
100%|███████████████████████████████████████████████| 3/3 [00:00<00:00, 1360.61it/s]
>>> dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False)
100%|██████████████████████████████████████████| 408/408 [00:00<00:00, 10819.97ex/s]
>>> dataset2[0]["tmp"]
[0.5]
>>> type(dataset2[0]["tmp"])
<class 'list'>
```
## Expected results
`dataset2[0]["tmp"]` should be an `np.ndarray`.
## Actual results
It's a list.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: mac, though I'm pretty sure it happens on a linux machine too
- Python version: 3.9.7
- PyArrow version: 6.0.1
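A minimal sketch of the workaround described in the comments above, assuming the same setup: calling `set_format("np")` makes indexing return NumPy arrays backed by the Arrow data instead of Python lists.

```python
import datasets
import numpy as np

dataset = datasets.load_dataset("glue", "mrpc")["validation"]
dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False)

# Without a format, indexing decodes the Arrow column into plain Python lists.
assert isinstance(dataset2[0]["tmp"], list)

# With NumPy formatting, indexing returns np.ndarray views of the same data,
# so no precision is lost either way; only the returned container changes.
dataset2.set_format("np")
assert isinstance(dataset2[0]["tmp"], np.ndarray)
```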
closed_by: null
reactions:
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4619/reactions"
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4619/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null

url: https://api.github.com/repos/huggingface/datasets/issues/4903
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4903/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4903/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4903/events
html_url: https://github.com/huggingface/datasets/pull/4903
id: 1352539075
node_id: PR_kwDODunzps494aud
number: 4903
title: Fix CI reporting
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"_The documentation is not available anymore as the PR was closed or merged._"
]
created_at: 2022-08-26T17:16:30Z
updated_at: 2022-08-26T17:49:33Z
closed_at: 2022-08-26T17:46:59Z
author_association: MEMBER
type: null
active_lock_reason: null
sub_issues_summary: null
body:
Fix CI so that it reports the default outcomes (failed and error) in addition to the custom ones (xfailed and xpassed) in the test summary.
This PR fixes a regression introduced by:
- #4845

That PR introduced the reporting of xfailed and xpassed but wrongly removed the reporting of the default failed and error outcomes.
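For context, pytest's `-r` flag selects which outcome categories appear in the short test summary; a sketch of the combined selection this fix restores (the `tests/` path is illustrative):

```python
import pytest

# -r summary characters: f = failed, E = error (the defaults restored here),
# x = xfailed, X = xpassed (the custom entries introduced in #4845).
exit_code = pytest.main(["tests/", "-rfExX"])
```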
closed_by:
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
reactions:
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4903/reactions"
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4903/timeline
performed_via_github_app: null
state_reason: null
draft: 0
pull_request:
{
"diff_url": "https://github.com/huggingface/datasets/pull/4903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4903",
"merged_at": "2022-08-26T17:46:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4903"
}

url: https://api.github.com/repos/huggingface/datasets/issues/4755
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4755/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4755/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4755/events
html_url: https://github.com/huggingface/datasets/issues/4755
id: 1319687044
node_id: I_kwDODunzps5OqNOE
number: 4755
title: Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/662612?v=4",
"events_url": "https://api.github.com/users/srobertjames/events{/privacy}",
"followers_url": "https://api.github.com/users/srobertjames/followers",
"following_url": "https://api.github.com/users/srobertjames/following{/other_user}",
"gists_url": "https://api.github.com/users/srobertjames/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srobertjames",
"id": 662612,
"login": "srobertjames",
"node_id": "MDQ6VXNlcjY2MjYxMg==",
"organizations_url": "https://api.github.com/users/srobertjames/orgs",
"received_events_url": "https://api.github.com/users/srobertjames/received_events",
"repos_url": "https://api.github.com/users/srobertjames/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srobertjames/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srobertjames/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srobertjames",
"user_view_type": "public"
}
labels:
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"I've built a minimal example that shows this bug without `n_proc`. It seems like it's a problem any way of using **tokenizers, `overflow_to_sample_mapping`, and Dataset.map, with a small batch size**:\r\n\r\n```\r\nimport datasets\r\nimport transformers\r\npretrained = 'deepset/tinyroberta-squad2'\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(pretrained)\r\n\r\nquestions = ['Can you tell me why?', 'What time is it?']\r\ncontexts = ['This is context zero', 'Another paragraph goes here'] \r\n\r\ndef tok(questions, contexts):\r\n return tokenizer(text=questions,\r\n text_pair=contexts,\r\n truncation='only_second',\r\n return_overflowing_tokens=True,\r\n )\r\nprint(tok(questions, contexts)['overflow_to_sample_mapping'])\r\nassert tok(questions, contexts)['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=1)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # FAILS produces [0,0]\r\n```\r\n\r\nNote that even if the batch size would be larger, there will be instances where we will not have a lot of data, and end up using small batches. This can occur e.g. if `n_proc` causes batches to be underfill. I imagine it can also occur in other ways, e.g. the final leftover batch at the end.",
"A larger batch size does _not_ have this behavior:\r\n\r\n```\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=2)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n```",
"I was trying the [Question answering](https://huggingface.co/learn/nlp-course/chapter7/7#question-answering) tutorial on Hugging face when i faced the same problem. The preprocessing step is [here](https://huggingface.co/learn/nlp-course/chapter7/7#processing-the-validation-data). i have changed ```max_length=200, stride=50```,\r\n\r\n```\r\nvalidation_dataset = raw_datasets['validation'].select(range(8)).map(\r\n preprocess_validation_examples,\r\n batched=True,\r\n remove_columns=raw_datasets[\"validation\"].column_names,\r\n num_proc=1\r\n)\r\nprint(validation_dataset['overflow_to_sample_mapping'])\r\nprint(validation_dataset['example_id'])\r\n```\r\nresult\r\n\r\n```\r\n[0, 1, 2, 3, 4, 5, 6, 7]\r\n['56be4db0acb8001400a502ec', '56be4db0acb8001400a502ed', '56be4db0acb8001400a502ee', \r\n'56be4db0acb8001400a502ef', '56be4db0acb8001400a502f0', '56be8e613aeaaa14008c90d1', \r\n'56be8e613aeaaa14008c90d2', '56be8e613aeaaa14008c90d3']\r\n```\r\nwhen ```num_proc=2```, result - \r\n\r\n```\r\n[0, 1, 2, 3, 0, 1, 2, 3]\r\n['56be4db0acb8001400a502ec', '56be4db0acb8001400a502ed', '56be4db0acb8001400a502ee', \r\n'56be4db0acb8001400a502ef', '56be4db0acb8001400a502f0', '56be8e613aeaaa14008c90d1', \r\n'56be8e613aeaaa14008c90d2', '56be8e613aeaaa14008c90d3']\r\n```\r\n\r\nwhen ```num_proc=3```, result - \r\n\r\n```\r\n[0, 1, 2, 0, 1, 2, 0, 1]\r\n['56be4db0acb8001400a502ec', '56be4db0acb8001400a502ed', '56be4db0acb8001400a502ee', \r\n'56be4db0acb8001400a502ef', '56be4db0acb8001400a502f0', '56be8e613aeaaa14008c90d1', \r\n'56be8e613aeaaa14008c90d2', '56be8e613aeaaa14008c90d3']\r\n```\r\n\r\nThe```overflow_to_sample_mapping``` changes with ```num_proc```, but ```example_id``` field remains the same . It seems that each process in ```map``` has its own counter for overflow_to_sample_mapping. If you are using ```overflow_to_sample_mapping``` inside the ```preprocess_validation_examples``` function, then there is no issue."
]
created_at: 2022-07-27T14:54:11Z
updated_at: 2023-12-13T19:34:43Z
closed_at: null
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary:
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
body:
## Describe the bug
When using `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples will be overflown into multiple token sequences.
However, when tokenizing is done via `Dataset.map`, with `n_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because each tokenizer only looks at its share of the samples, and maps to the index _within its share_, but then `Dataset.map` collates them together.
## Steps to reproduce the bug
1. Make a dataset of 3 strings.
2. Tokenize via Dataset.map with n_proc = 8
3. Inspect the `overflow_to_sample_mapping` field
## Expected results
`[0, 1, 2]`
## Actual results
`[0, 0, 0]`
Notes:
1. I have not yet extracted a minimal example, but the above reproduces reliably.
2. If the dataset is large, I have yet to determine whether this bug (a) does not happen at all, (b) always happens, or (c) happens only on the small leftover batch at the end.
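A sketch of the workaround suggested in the comments above: consume `overflow_to_sample_mapping` inside the mapped function, while the indices are still local to the current batch. The setup mirrors the minimal example from the first comment; the `source_question` column name is a hypothetical choice.

```python
import datasets
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("deepset/tinyroberta-squad2")
questions = ["Can you tell me why?", "What time is it?"]
contexts = ["This is context zero", "Another paragraph goes here"]
ds = datasets.Dataset.from_dict({"question": questions, "context": contexts})

def preprocess(batch):
    enc = tokenizer(
        batch["question"],
        batch["context"],
        truncation="only_second",
        return_overflowing_tokens=True,
    )
    # The mapping indices refer to rows *within this batch*, so resolving
    # them here, before Dataset.map collates batches from different workers,
    # keeps the result independent of batch_size and num_proc.
    mapping = enc.pop("overflow_to_sample_mapping")
    enc["source_question"] = [batch["question"][i] for i in mapping]
    return enc

tokens = ds.map(preprocess, batched=True, batch_size=1)
```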
closed_by: null
reactions:
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4755/reactions"
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4755/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null

url: https://api.github.com/repos/huggingface/datasets/issues/5969
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5969/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5969/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/5969/events
html_url: https://github.com/huggingface/datasets/pull/5969
id: 1765529905
node_id: PR_kwDODunzps5Tcgq4
number: 5969
title: Add `encoding` and `errors` params to JSON loader
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006770 / 0.011353 (-0.004583) | 0.004143 / 0.011008 (-0.006865) | 0.098928 / 0.038508 (0.060420) | 0.044893 / 0.023109 (0.021783) | 0.302630 / 0.275898 (0.026732) | 0.368173 / 0.323480 (0.044693) | 0.005631 / 0.007986 (-0.002354) | 0.003397 / 0.004328 (-0.000931) | 0.075748 / 0.004250 (0.071497) | 0.062582 / 0.037052 (0.025530) | 0.329586 / 0.258489 (0.071097) | 0.362625 / 0.293841 (0.068784) | 0.033250 / 0.128546 (-0.095296) | 0.008880 / 0.075646 (-0.066766) | 0.329683 / 0.419271 (-0.089588) | 0.054426 / 0.043533 (0.010893) | 0.297940 / 0.255139 (0.042801) | 0.319796 / 0.283200 (0.036597) | 0.023296 / 0.141683 (-0.118387) | 1.462142 / 1.452155 (0.009987) | 1.495796 / 1.492716 (0.003079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201771 / 0.018006 (0.183765) | 0.454514 / 0.000490 (0.454024) | 0.003333 / 0.000200 (0.003133) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028084 / 0.037411 (-0.009327) | 0.109452 / 0.014526 (0.094926) | 0.119200 / 0.176557 (-0.057357) | 0.180302 / 0.737135 (-0.556834) | 0.125653 / 0.296338 (-0.170686) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409819 / 0.215209 (0.194610) | 4.055117 / 2.077655 (1.977462) | 1.855279 / 1.504120 (0.351159) | 1.655281 / 1.541195 (0.114086) | 1.687938 / 1.468490 
(0.219448) | 0.528352 / 4.584777 (-4.056425) | 3.750250 / 3.745712 (0.004538) | 3.386741 / 5.269862 (-1.883121) | 1.572036 / 4.565676 (-2.993640) | 0.065125 / 0.424275 (-0.359150) | 0.011259 / 0.007607 (0.003652) | 0.513449 / 0.226044 (0.287405) | 5.139421 / 2.268929 (2.870492) | 2.316973 / 55.444624 (-53.127651) | 1.984109 / 6.876477 (-4.892368) | 2.127915 / 2.142072 (-0.014158) | 0.653238 / 4.805227 (-4.151989) | 0.142686 / 6.500664 (-6.357978) | 0.063666 / 0.075469 (-0.011803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.185174 / 1.841788 (-0.656614) | 14.790282 / 8.074308 (6.715974) | 13.089222 / 10.191392 (2.897830) | 0.146055 / 0.680424 (-0.534369) | 0.017835 / 0.534201 (-0.516366) | 0.399598 / 0.579283 (-0.179685) | 0.425296 / 0.434364 (-0.009068) | 0.478552 / 0.540337 (-0.061786) | 0.579702 / 1.386936 (-0.807234) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004156 / 0.011008 (-0.006853) | 0.074948 / 0.038508 (0.036440) | 0.043368 / 0.023109 (0.020259) | 0.355389 / 0.275898 (0.079491) | 0.429167 / 0.323480 (0.105687) | 0.003911 / 0.007986 (-0.004075) | 0.004340 / 0.004328 (0.000012) | 0.075940 / 0.004250 (0.071689) | 0.054293 / 0.037052 (0.017241) | 0.400317 / 0.258489 (0.141827) | 0.432001 / 0.293841 (0.138160) | 0.032340 / 0.128546 (-0.096206) | 0.008876 / 0.075646 (-0.066770) | 0.082284 / 0.419271 (-0.336987) | 0.050819 / 0.043533 (0.007286) | 0.351994 / 0.255139 (0.096855) | 0.375917 / 0.283200 (0.092717) | 0.022466 / 0.141683 (-0.119217) | 1.538824 / 1.452155 (0.086669) | 1.563995 / 1.492716 (0.071279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227330 / 0.018006 (0.209323) | 0.446380 / 0.000490 (0.445890) | 0.000408 / 0.000200 (0.000208) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028534 / 0.037411 (-0.008878) | 0.113467 / 0.014526 (0.098941) | 0.123590 / 0.176557 (-0.052966) | 0.174309 / 0.737135 (-0.562827) | 0.130631 / 0.296338 (-0.165707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441020 / 0.215209 (0.225811) | 4.386564 / 2.077655 (2.308909) | 2.100704 / 1.504120 (0.596584) | 1.901484 / 1.541195 (0.360289) | 1.963494 / 1.468490 (0.495004) | 0.536838 / 4.584777 (-4.047939) | 3.739071 / 3.745712 (-0.006642) | 3.278981 / 5.269862 (-1.990881) | 1.515476 / 4.565676 (-3.050201) | 0.066388 / 0.424275 (-0.357887) | 0.011857 / 0.007607 (0.004250) | 0.545507 / 0.226044 (0.319463) | 5.441479 / 2.268929 (3.172550) | 2.602144 / 55.444624 (-52.842480) | 2.235583 / 6.876477 (-4.640894) | 2.293458 / 2.142072 (0.151385) | 0.658535 / 4.805227 (-4.146692) | 0.141327 / 6.500664 (-6.359337) | 0.063726 / 0.075469 (-0.011743) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247819 / 1.841788 (-0.593968) | 15.234524 / 8.074308 (7.160216) | 14.592700 / 10.191392 (4.401308) | 0.141952 / 0.680424 (-0.538472) | 0.017747 / 0.534201 (-0.516454) | 0.396819 / 0.579283 (-0.182465) | 0.415902 / 0.434364 (-0.018462) | 0.464619 / 0.540337 (-0.075718) | 0.560866 / 1.386936 (-0.826070) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008278 / 0.011353 (-0.003075) | 0.005044 / 0.011008 (-0.005964) | 0.123382 / 0.038508 (0.084874) | 0.054039 / 0.023109 (0.030929) | 0.382338 / 0.275898 (0.106440) | 0.453287 / 0.323480 (0.129807) | 0.006342 / 0.007986 (-0.001644) | 0.003930 / 0.004328 (-0.000398) | 0.094039 / 0.004250 (0.089789) | 0.076525 / 0.037052 (0.039472) | 0.394066 / 0.258489 (0.135577) | 0.445600 / 0.293841 (0.151759) | 0.039348 / 0.128546 (-0.089199) | 0.010485 / 0.075646 (-0.065161) | 0.433730 / 0.419271 (0.014459) | 0.082671 / 0.043533 (0.039138) | 0.375250 / 0.255139 (0.120111) | 0.416269 / 0.283200 (0.133070) | 0.038397 / 0.141683 (-0.103286) | 1.864834 / 1.452155 (0.412680) | 2.010453 / 1.492716 (0.517737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240008 / 0.018006 (0.222002) | 0.470975 / 0.000490 (0.470485) | 0.004001 / 0.000200 (0.003801) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031107 / 0.037411 (-0.006304) | 0.129371 / 0.014526 (0.114846) | 0.141559 / 0.176557 (-0.034997) | 0.205571 / 0.737135 (-0.531564) | 0.144611 / 0.296338 (-0.151728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506972 / 0.215209 (0.291763) | 5.055951 / 2.077655 (2.978296) | 2.397438 / 1.504120 (0.893318) | 2.170435 / 1.541195 (0.629240) | 2.240296 / 1.468490 
(0.771806) | 0.641559 / 4.584777 (-3.943218) | 4.644772 / 3.745712 (0.899060) | 4.064200 / 5.269862 (-1.205662) | 1.946991 / 4.565676 (-2.618685) | 0.086413 / 0.424275 (-0.337862) | 0.015082 / 0.007607 (0.007475) | 0.670413 / 0.226044 (0.444369) | 6.331346 / 2.268929 (4.062418) | 2.965813 / 55.444624 (-52.478812) | 2.547952 / 6.876477 (-4.328524) | 2.718390 / 2.142072 (0.576318) | 0.796657 / 4.805227 (-4.008571) | 0.173229 / 6.500664 (-6.327435) | 0.079606 / 0.075469 (0.004137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568761 / 1.841788 (-0.273026) | 18.485432 / 8.074308 (10.411124) | 15.758513 / 10.191392 (5.567121) | 0.170427 / 0.680424 (-0.509997) | 0.021421 / 0.534201 (-0.512780) | 0.518623 / 0.579283 (-0.060660) | 0.525887 / 0.434364 (0.091523) | 0.640331 / 0.540337 (0.099993) | 0.766748 / 1.386936 (-0.620188) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007680 / 0.011353 (-0.003673) | 0.005289 / 0.011008 (-0.005719) | 0.093773 / 0.038508 (0.055265) | 0.054997 / 0.023109 (0.031888) | 0.456277 / 0.275898 (0.180379) | 0.500642 / 0.323480 (0.177162) | 0.005935 / 0.007986 (-0.002050) | 0.004375 / 0.004328 (0.000047) | 0.094131 / 0.004250 (0.089881) | 0.063399 / 0.037052 (0.026347) | 0.470546 / 0.258489 (0.212057) | 0.504989 / 0.293841 (0.211148) | 0.038541 / 0.128546 (-0.090006) | 0.010403 / 0.075646 (-0.065244) | 0.102469 / 0.419271 (-0.316802) | 0.063105 / 0.043533 (0.019572) | 0.466005 / 0.255139 (0.210866) | 0.458677 / 0.283200 (0.175477) | 0.028407 / 0.141683 (-0.113276) | 1.893829 / 1.452155 (0.441675) | 1.917954 / 1.492716 (0.425238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272760 / 0.018006 (0.254754) | 0.476159 / 0.000490 (0.475669) | 0.008467 / 0.000200 (0.008267) | 0.000146 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035755 / 0.037411 (-0.001656) | 0.145038 / 0.014526 (0.130512) | 0.148322 / 0.176557 (-0.028235) | 0.210193 / 0.737135 (-0.526943) | 0.156547 / 0.296338 (-0.139792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.541204 / 0.215209 (0.325995) | 5.382746 / 2.077655 (3.305091) | 2.704229 / 1.504120 (1.200109) | 2.468422 / 1.541195 (0.927227) | 2.522672 / 1.468490 (1.054182) | 0.644899 / 4.584777 (-3.939878) | 4.654401 / 3.745712 (0.908689) | 2.159223 / 5.269862 (-3.110638) | 1.280098 / 4.565676 (-3.285578) | 0.080053 / 0.424275 (-0.344222) | 0.014383 / 0.007607 (0.006776) | 0.662770 / 0.226044 (0.436725) | 6.617651 / 2.268929 (4.348722) | 3.234347 / 55.444624 (-52.210277) | 2.861417 / 6.876477 (-4.015059) | 2.888928 / 2.142072 (0.746856) | 0.792854 / 4.805227 (-4.012374) | 0.172553 / 6.500664 (-6.328111) | 0.078402 / 0.075469 (0.002933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565351 / 1.841788 (-0.276436) | 18.681916 / 8.074308 (10.607608) | 17.264473 / 10.191392 (7.073081) | 0.168461 / 0.680424 (-0.511963) | 0.021353 / 0.534201 (-0.512848) | 0.517843 / 0.579283 (-0.061440) | 0.519907 / 0.434364 (0.085543) | 0.623687 / 0.540337 (0.083350) | 0.761796 / 1.386936 (-0.625140) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004268 / 0.011008 (-0.006741) | 0.098644 / 0.038508 (0.060136) | 0.044643 / 0.023109 (0.021534) | 0.309420 / 0.275898 (0.033522) | 0.379294 / 0.323480 (0.055815) | 0.005729 / 0.007986 (-0.002256) | 0.003615 / 0.004328 (-0.000714) | 0.076086 / 0.004250 (0.071835) | 0.068994 / 0.037052 (0.031942) | 0.325653 / 0.258489 (0.067164) | 0.375187 / 0.293841 (0.081347) | 0.032546 / 0.128546 (-0.096000) | 0.009089 / 0.075646 (-0.066557) | 0.329905 / 0.419271 (-0.089366) | 0.066832 / 0.043533 (0.023300) | 0.299247 / 0.255139 (0.044108) | 0.323460 / 0.283200 (0.040260) | 0.034226 / 0.141683 (-0.107457) | 1.475659 / 1.452155 (0.023505) | 1.556234 / 1.492716 (0.063518) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292305 / 0.018006 (0.274299) | 0.542584 / 0.000490 (0.542094) | 0.003047 / 0.000200 (0.002847) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030096 / 0.037411 (-0.007315) | 0.112341 / 0.014526 (0.097815) | 0.124965 / 0.176557 (-0.051591) | 0.183159 / 0.737135 (-0.553976) | 0.131885 / 0.296338 (-0.164453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426437 / 0.215209 (0.211228) | 4.260984 / 2.077655 (2.183330) | 2.078358 / 1.504120 (0.574238) | 1.877644 / 1.541195 (0.336449) | 2.044036 / 1.468490 
(0.575546) | 0.532980 / 4.584777 (-4.051797) | 3.749573 / 3.745712 (0.003860) | 1.944155 / 5.269862 (-3.325706) | 1.090307 / 4.565676 (-3.475370) | 0.065445 / 0.424275 (-0.358830) | 0.011237 / 0.007607 (0.003630) | 0.521448 / 0.226044 (0.295403) | 5.213118 / 2.268929 (2.944189) | 2.507829 / 55.444624 (-52.936795) | 2.177179 / 6.876477 (-4.699297) | 2.351161 / 2.142072 (0.209088) | 0.656775 / 4.805227 (-4.148452) | 0.141207 / 6.500664 (-6.359457) | 0.063286 / 0.075469 (-0.012183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190281 / 1.841788 (-0.651506) | 15.327424 / 8.074308 (7.253116) | 13.300695 / 10.191392 (3.109303) | 0.190484 / 0.680424 (-0.489939) | 0.017984 / 0.534201 (-0.516217) | 0.405714 / 0.579283 (-0.173569) | 0.435915 / 0.434364 (0.001551) | 0.494083 / 0.540337 (-0.046254) | 0.600616 / 1.386936 (-0.786320) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006740 / 0.011353 (-0.004613) | 0.004289 / 0.011008 (-0.006719) | 0.076532 / 0.038508 (0.038024) | 0.043305 / 0.023109 (0.020196) | 0.356111 / 0.275898 (0.080213) | 0.434121 / 0.323480 (0.110641) | 0.005599 / 0.007986 (-0.002387) | 0.003461 / 0.004328 (-0.000868) | 0.077097 / 0.004250 (0.072847) | 0.055369 / 0.037052 (0.018317) | 0.367093 / 0.258489 (0.108604) | 0.418801 / 0.293841 (0.124960) | 0.032057 / 0.128546 (-0.096489) | 0.009048 / 0.075646 (-0.066599) | 0.082897 / 0.419271 (-0.336374) | 0.050287 / 0.043533 (0.006754) | 0.352060 / 0.255139 (0.096921) | 0.376278 / 0.283200 (0.093078) | 0.023924 / 0.141683 (-0.117759) | 1.522780 / 1.452155 (0.070626) | 1.578938 / 1.492716 (0.086222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287317 / 0.018006 (0.269311) | 0.508490 / 0.000490 (0.508000) | 0.000431 / 0.000200 (0.000231) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031139 / 0.037411 (-0.006272) | 0.113927 / 0.014526 (0.099401) | 0.128147 / 0.176557 (-0.048409) | 0.179712 / 0.737135 (-0.557424) | 0.134364 / 0.296338 (-0.161975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452834 / 0.215209 (0.237625) | 4.507944 / 2.077655 (2.430289) | 2.287758 / 1.504120 (0.783638) | 2.091145 / 1.541195 (0.549951) | 2.196228 / 1.468490 (0.727738) | 0.539306 / 4.584777 (-4.045471) | 3.838941 / 3.745712 (0.093228) | 1.908801 / 5.269862 (-3.361060) | 1.139235 / 4.565676 (-3.426442) | 0.066677 / 0.424275 (-0.357599) | 0.011422 / 0.007607 (0.003815) | 0.562966 / 0.226044 (0.336921) | 5.633712 / 2.268929 (3.364784) | 2.788622 / 55.444624 (-52.656002) | 2.438465 / 6.876477 (-4.438012) | 2.523479 / 2.142072 (0.381407) | 0.668730 / 4.805227 (-4.136498) | 0.143977 / 6.500664 (-6.356687) | 0.064661 / 0.075469 (-0.010808) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291708 / 1.841788 (-0.550080) | 15.573316 / 8.074308 (7.499008) | 14.435099 / 10.191392 (4.243707) | 0.147745 / 0.680424 (-0.532679) | 0.017602 / 0.534201 (-0.516599) | 0.401560 / 0.579283 (-0.177723) | 0.429861 / 0.434364 (-0.004502) | 0.469800 / 0.540337 (-0.070538) | 0.567515 / 1.386936 (-0.819421) |\n\n</details>\n</details>\n\n\n"
]
created_at: 2023-06-20T14:28:35Z
updated_at: 2023-06-21T13:39:50Z
closed_at: 2023-06-21T13:32:22Z
author_association: COLLABORATOR
type: null
active_lock_reason: null
sub_issues_summary: null
body:
"Requested" in https://discuss.huggingface.co/t/utf-16-for-datasets/43828/3.
`pd.read_json` also has these parameters, so it makes sense to be consistent.
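Going by the PR title, usage would presumably look like the following; the file path and encoding are illustrative.

```python
from datasets import load_dataset

# `encoding` and `errors` are forwarded to text decoding when reading the
# JSON file, mirroring the corresponding pandas.read_json parameters.
ds = load_dataset(
    "json",
    data_files="data.jsonl",  # e.g. a UTF-16 encoded JSON Lines file
    encoding="utf-16",
    errors="ignore",
)
```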
closed_by:
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
reactions:
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5969/reactions"
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5969/timeline
performed_via_github_app: null
state_reason: null
draft: 0
pull_request:
{
"diff_url": "https://github.com/huggingface/datasets/pull/5969.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5969",
"merged_at": "2023-06-21T13:32:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5969.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5969"
}

url: https://api.github.com/repos/huggingface/datasets/issues/4698
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4698/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4698/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4698/events
html_url: https://github.com/huggingface/datasets/pull/4698
id: 1307539585
node_id: PR_kwDODunzps47i9gN
number: 4698
title: Enable streaming dataset to use the "all" split
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4698). All of your documentation changes will be reflected on that endpoint.",
"@albertvillanova \r\nAdding the validation split causes these two `assert_called_once` assertions to fail with `AssertionError: Expected 'ArrowWriter' to have been called once. Called 2 times`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/main/tests/test_builder.py#L548-L562\r\n\r\nIt might be better to create a new dummy generator for the streaming tests, WDYT? Alternatively we could test for `self.call_count` equalling 2.",
"@cakiki have you read my comment in the issue page?\r\nhttps://github.com/huggingface/datasets/issues/4637#issuecomment-1175984812",
"Streaming with `split=all` seems to be working, will fix the failing test next",
"Not sure if marking the PR as \"ready for review\" actually notified you, so tagging @albertvillanova just in case :smiley_cat: ",
"cc @lhoestq ",
"Hi @cakiki, still interested in working on this? :) ",
"@albertvillanova So sorry; I have no idea how this slipped through the cracks. Yes, I'd still like to work on this. Is it okay if I DM you on slack?",
"Sure!! And nevermind!"
]
created_at: 2022-07-18T07:47:39Z
updated_at: 2023-01-19T10:11:38Z
closed_at: null
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
sub_issues_summary: null
body:
Fixes #4637
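Per the PR title, the intent is for the special "all" split to work in streaming mode too; a sketch of the expected usage, with an illustrative dataset name:

```python
from datasets import load_dataset

# "all" denotes the union of every available split; with this change it
# should also be accepted when streaming=True.
ds = load_dataset("glue", "mrpc", split="all", streaming=True)
first = next(iter(ds))
```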
closed_by: null
reactions:
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4698/reactions"
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4698/timeline
performed_via_github_app: null
state_reason: null
draft: 0
pull_request:
{
"diff_url": "https://github.com/huggingface/datasets/pull/4698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4698",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4698"
}

url: https://api.github.com/repos/huggingface/datasets/issues/7296
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7296/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7296/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7296/events
html_url: https://github.com/huggingface/datasets/pull/7296
id: 2675573974
node_id: PR_kwDODunzps6ChJIJ
number: 7296
title: Remove upper version limit of fsspec[http]
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyyever",
"id": 17618148,
"login": "cyyever",
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"repos_url": "https://api.github.com/users/cyyever/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyyever",
"user_view_type": "public"
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2024-11-20T11:29:16Z
updated_at: 2025-03-06T04:47:04Z
closed_at: 2025-03-06T04:47:01Z
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
sub_issues_summary: null
body: null
closed_by:
{
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyyever",
"id": 17618148,
"login": "cyyever",
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"repos_url": "https://api.github.com/users/cyyever/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyyever",
"user_view_type": "public"
}
reactions:
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7296/reactions"
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7296/timeline
performed_via_github_app: null
state_reason: null
draft: 0
pull_request:
{
"diff_url": "https://github.com/huggingface/datasets/pull/7296.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7296",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7296.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7296"
}

url: https://api.github.com/repos/huggingface/datasets/issues/4626
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/4626/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/4626/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/4626/events
html_url: https://github.com/huggingface/datasets/issues/4626
id: 1293256269
node_id: I_kwDODunzps5NFYZN
number: 4626
title: Add non-commercial licensing info for datasets for which we removed tags
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"yep plus `license_details` also makes sense for this IMO"
]
created_at: 2022-07-04T14:32:43Z
updated_at: 2022-07-08T14:27:29Z
closed_at: null
author_association: MEMBER
type: null
active_lock_reason: null
sub_issues_summary:
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
body:
We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753
The reason is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c85de4eda5d152dfede7671491449cb/src/datasets/utils/resources/standard_licenses.tsv).
We should update the Licensing Information section of the affected dataset cards, now that the non-commercial tag no longer exists for certain datasets.
closed_by: null
reactions:
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4626/reactions"
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/4626/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null

url: https://api.github.com/repos/huggingface/datasets/issues/7128
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7128/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7128/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7128/events
html_url: https://github.com/huggingface/datasets/issues/7128
id: 2490274775
node_id: I_kwDODunzps6UbpPX
number: 7128
title: Filter Large Dataset Entry by Entry
user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/36057290?v=4",
"events_url": "https://api.github.com/users/QiyaoWei/events{/privacy}",
"followers_url": "https://api.github.com/users/QiyaoWei/followers",
"following_url": "https://api.github.com/users/QiyaoWei/following{/other_user}",
"gists_url": "https://api.github.com/users/QiyaoWei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QiyaoWei",
"id": 36057290,
"login": "QiyaoWei",
"node_id": "MDQ6VXNlcjM2MDU3Mjkw",
"organizations_url": "https://api.github.com/users/QiyaoWei/orgs",
"received_events_url": "https://api.github.com/users/QiyaoWei/received_events",
"repos_url": "https://api.github.com/users/QiyaoWei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QiyaoWei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiyaoWei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QiyaoWei",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n",
"Jumping on this as it seems relevant - when I use the `filter` method, it often results in an OOM (or at least unacceptably high memory usage).\r\n\r\nFor example in the [this notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_), we load an object detection dataset from HF and imagine I want to filter such that I only have images which contain a single annotation class. Each row has a JSON field that contains MS-COCO annotations for the image, so we could load that field and filter on it.\r\n\r\nThe test dataset is only about 440 images, probably less than 1GB, but running the following filter crashes the VM (over 12 GB RAM):\r\n\r\n```python\r\nimport json\r\ndef filter_single_class(example, target_class_id):\r\n \"\"\"Filters examples based on whether they contain annotations from a single class.\r\n\r\n Args:\r\n example: A dictionary representing a single example from the dataset.\r\n target_class_id: The target class ID to filter for.\r\n\r\n Returns:\r\n True if the example contains only annotations from the target class, False otherwise.\r\n \"\"\"\r\n if not example['coco_annotations']:\r\n return False\r\n\r\n annotation_category_ids = set([annotation['category_id'] for annotation in json.loads(example['coco_annotations'])])\r\n\r\n return len(annotation_category_ids) == 1 and target_class_id in annotation_category_ids\r\n\r\ntarget_class_id = 1 \r\nfiltered_dataset = dataset['test'].filter(lambda example: filter_single_class(example, target_class_id))\r\n```\r\n\r\n<img width=\"255\" alt=\"image\" src=\"https://github.com/user-attachments/assets/be475f15-5b6b-4df2-b5b5-a1f60ae2b05c\">\r\n\r\nIterating over the dataset works fine:\r\n\r\n```python\r\nfiltered_dataset = []\r\nfor example in dataset['test']:\r\n if filter_single_class(example, target_class_id):\r\n filtered_dataset.append(example)\r\n```\r\n\r\n<img width=\"129\" alt=\"image\" src=\"https://github.com/user-attachments/assets/34fa5612-0394-4c46-9f34-e94650f05d65\">\r\n\r\nIt would be great if there was guidance in the documentation on how to use filters efficiently, or if this is some performance bug that could be addressed. At the very least I would expect a filter operation to use at most 2x the footprint of the database plus some overhead for the lambda (i.e. worst case would be a duplicate copy with all entries retained). Even if the operation is parallelised, each thread/worker should only take a subset of the dataset - so I'm not sure where this ballooning in memory usage comes from.\r\n\r\nFrom some other comments there seems to be a workaround with `writer_batch_size` or caching to file, but in the [docs](https://huggingface.co/docs/datasets/v3.0.0/en/package_reference/main_classes#datasets.Dataset.filter) at least, `keep_in_memory` defaults to `False`.",
"You can try passing input_columns=[\"coco_annotations\"] to only load this column instead of all the columns. In that case your function should take coco_annotations as input instead of example",
"If your filter_function is large and computationally intensive, consider using multi-processing or multi-threading with concurrent.futures to filter the dataset. This approach allows you to process multiple tables concurrently, reducing overall processing time, especially for CPU-bound tasks. Use ThreadPoolExecutor for I/O-bound operations and ProcessPoolExecutor for CPU-bound operations.\r\n"
] | 2024-08-27T20:31:09Z
| 2024-10-07T23:37:44Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:
```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset(
    "really-large-dataset",  # placeholder name
    split="train",           # a split must be selected to get a single iterable stream
    streaming=True,
)
# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)

# Define a function to filter the data
def filter_function(table):
    return some_condition  # placeholder predicate

# Use the filter function on your dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```
And then I work on the processed dataset, which would be orders of magnitude faster than working on the original. I would love to hear whether the problem setup and solution make sense to people, and whether anyone has suggestions!
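A minimal sketch of the same idea using the lazy methods that `IterableDataset` already provides; `take` and `filter` are both evaluated on the fly, so nothing is materialized until you iterate (the dataset name and the `text` column used in the predicate are placeholders):

```python
from datasets import load_dataset

# streaming=True with an explicit split yields an IterableDataset.
dataset = load_dataset("really-large-dataset", split="train", streaming=True)

def filter_function(example):
    # placeholder predicate on a hypothetical "text" column
    return len(example["text"]) > 0

# Both `take` and `filter` are lazy on IterableDataset, so this mirrors
# the islice-plus-generator approach above.
filtered_dataset = dataset.take(10_000).filter(filter_function)

for example in filtered_dataset:
    ...  # work on the filtered stream
```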
### Motivation
See description above
### Your contribution
Happy to make PR if this is a new feature
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7128/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7128/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6081
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6081/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6081/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6081/events
|
https://github.com/huggingface/datasets/pull/6081
| 1,824,486,278
|
PR_kwDODunzps5WjU0k
| 6,081
|
Deprecate `Dataset.export`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006680 / 0.011353 (-0.004673) | 0.003987 / 0.011008 (-0.007021) | 0.084677 / 0.038508 (0.046169) | 0.076800 / 0.023109 (0.053691) | 0.358338 / 0.275898 (0.082440) | 0.386573 / 0.323480 (0.063094) | 0.005370 / 0.007986 (-0.002616) | 0.003323 / 0.004328 (-0.001005) | 0.064238 / 0.004250 (0.059988) | 0.057859 / 0.037052 (0.020806) | 0.355408 / 0.258489 (0.096919) | 0.388302 / 0.293841 (0.094461) | 0.030784 / 0.128546 (-0.097762) | 0.008381 / 0.075646 (-0.067266) | 0.287971 / 0.419271 (-0.131300) | 0.053078 / 0.043533 (0.009545) | 0.352719 / 0.255139 (0.097580) | 0.370319 / 0.283200 (0.087119) | 0.023064 / 0.141683 (-0.118619) | 1.480661 / 1.452155 (0.028507) | 1.555711 / 1.492716 (0.062995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211289 / 0.018006 (0.193283) | 0.466957 / 0.000490 (0.466467) | 0.003760 / 0.000200 (0.003561) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028552 / 0.037411 (-0.008859) | 0.084469 / 0.014526 (0.069943) | 0.096027 / 0.176557 (-0.080529) | 0.152170 / 0.737135 (-0.584965) | 0.096513 / 0.296338 (-0.199825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382940 / 0.215209 (0.167731) | 3.841735 / 2.077655 (1.764080) | 1.850575 / 1.504120 (0.346455) | 1.676554 / 1.541195 (0.135360) | 1.765241 / 1.468490 
(0.296751) | 0.482131 / 4.584777 (-4.102646) | 3.512739 / 3.745712 (-0.232973) | 3.977042 / 5.269862 (-1.292820) | 2.387568 / 4.565676 (-2.178109) | 0.056657 / 0.424275 (-0.367618) | 0.007283 / 0.007607 (-0.000324) | 0.468193 / 0.226044 (0.242149) | 4.704077 / 2.268929 (2.435149) | 2.373467 / 55.444624 (-53.071157) | 2.002470 / 6.876477 (-4.874007) | 2.228280 / 2.142072 (0.086208) | 0.576908 / 4.805227 (-4.228320) | 0.132000 / 6.500664 (-6.368664) | 0.060544 / 0.075469 (-0.014926) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256168 / 1.841788 (-0.585619) | 19.965458 / 8.074308 (11.891150) | 14.521435 / 10.191392 (4.330043) | 0.159156 / 0.680424 (-0.521268) | 0.018170 / 0.534201 (-0.516031) | 0.393019 / 0.579283 (-0.186264) | 0.415002 / 0.434364 (-0.019362) | 0.471810 / 0.540337 (-0.068528) | 0.658907 / 1.386936 (-0.728029) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006836 / 0.011353 (-0.004517) | 0.004067 / 0.011008 (-0.006942) | 0.066242 / 0.038508 (0.027734) | 0.078601 / 0.023109 (0.055491) | 0.369371 / 0.275898 (0.093473) | 0.402026 / 0.323480 (0.078546) | 0.006097 / 0.007986 (-0.001889) | 0.003337 / 0.004328 (-0.000991) | 0.065854 / 0.004250 (0.061603) | 0.057665 / 0.037052 (0.020612) | 0.379709 / 0.258489 (0.121219) | 0.406868 / 0.293841 (0.113027) | 0.031946 / 0.128546 (-0.096600) | 0.008691 / 0.075646 (-0.066955) | 0.071430 / 0.419271 (-0.347841) | 0.049518 / 0.043533 (0.005986) | 0.370439 / 0.255139 (0.115300) | 0.389235 / 0.283200 (0.106036) | 0.023730 / 0.141683 (-0.117953) | 1.509035 / 1.452155 (0.056880) | 1.548890 / 1.492716 (0.056173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229264 / 0.018006 (0.211258) | 0.445801 / 0.000490 (0.445312) | 0.000363 / 0.000200 (0.000163) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032377 / 0.037411 (-0.005034) | 0.091082 / 0.014526 (0.076556) | 0.104816 / 0.176557 (-0.071740) | 0.161040 / 0.737135 (-0.576095) | 0.105165 / 0.296338 (-0.191173) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411012 / 0.215209 (0.195803) | 4.097256 / 2.077655 (2.019602) | 2.088686 / 1.504120 (0.584566) | 1.934429 / 1.541195 (0.393234) | 2.027387 / 1.468490 (0.558896) | 0.476262 / 4.584777 (-4.108515) | 3.518416 / 3.745712 (-0.227296) | 3.260919 / 5.269862 (-2.008943) | 2.041441 / 4.565676 (-2.524235) | 0.056302 / 0.424275 (-0.367973) | 0.007750 / 0.007607 (0.000143) | 0.489966 / 0.226044 (0.263922) | 4.915844 / 2.268929 (2.646916) | 2.617001 / 55.444624 (-52.827623) | 2.333557 / 6.876477 (-4.542920) | 2.484530 / 2.142072 (0.342458) | 0.572009 / 4.805227 (-4.233219) | 0.142557 / 6.500664 (-6.358107) | 0.066711 / 0.075469 (-0.008758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359929 / 1.841788 (-0.481859) | 20.332252 / 8.074308 (12.257943) | 14.585842 / 10.191392 (4.394450) | 0.170498 / 0.680424 (-0.509926) | 0.018450 / 0.534201 (-0.515751) | 0.395449 / 0.579283 (-0.183834) | 0.409666 / 0.434364 (-0.024698) | 0.467937 / 0.540337 (-0.072401) | 0.616078 / 1.386936 (-0.770858) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-27T14:22:18Z
| 2023-07-28T11:09:54Z
| 2023-07-28T11:01:04Z
|
COLLABORATOR
| null | null | null |
Deprecate `Dataset.export`, which generates a TFRecord file from a dataset, as this method is undocumented and its usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) or the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) tutorial (on which this method is based) to write TFRecord files instead.
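For anyone migrating, a minimal sketch (not from the `datasets` codebase) of writing a dataset to a TFRecord file with `tf.io.TFRecordWriter`; the column names are illustrative:

```python
import tensorflow as tf
from datasets import Dataset

# Toy dataset with a string column and an integer column (illustrative).
ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

with tf.io.TFRecordWriter("data.tfrecord") as writer:
    for ex in ds:
        # Serialize each row as a tf.train.Example protobuf.
        features = tf.train.Features(feature={
            "text": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[ex["text"].encode("utf-8")])
            ),
            "label": tf.train.Feature(
                int64_list=tf.train.Int64List(value=[ex["label"]])
            ),
        })
        writer.write(tf.train.Example(features=features).SerializeToString())
```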
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6081/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6081/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6081",
"merged_at": "2023-07-28T11:01:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6081"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7022
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7022/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7022/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7022/events
|
https://github.com/huggingface/datasets/issues/7022
| 2,388,064,650
|
I_kwDODunzps6OVvmK
| 7,022
|
There is dead code after we require pyarrow >= 15.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2024-07-03T08:52:57Z
| 2024-07-03T09:17:36Z
| 2024-07-03T09:17:36Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
There are code lines specific to pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
Those code lines are now dead code and should be removed.
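For illustration, a gate of this kind typically has the following shape (a hypothetical sketch, not a verbatim excerpt from the codebase):

```python
import pyarrow
from packaging import version

if version.parse(pyarrow.__version__) < version.parse("15.0.0"):
    # Fallback for old pyarrow: unreachable once pyarrow >= 15.0.0
    # is required, i.e. dead code that can be removed.
    use_legacy_path = True
else:
    use_legacy_path = False
```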
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7022/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7022/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6481
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6481/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6481/events
|
https://github.com/huggingface/datasets/issues/6481
| 2,032,650,003
|
I_kwDODunzps55J8cT
| 6,481
|
using torchrun, save_to_disk suddenly shows SIGTERM
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4",
"events_url": "https://api.github.com/users/Ariya12138/events{/privacy}",
"followers_url": "https://api.github.com/users/Ariya12138/followers",
"following_url": "https://api.github.com/users/Ariya12138/following{/other_user}",
"gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ariya12138",
"id": 85916625,
"login": "Ariya12138",
"node_id": "MDQ6VXNlcjg1OTE2NjI1",
"organizations_url": "https://api.github.com/users/Ariya12138/orgs",
"received_events_url": "https://api.github.com/users/Ariya12138/received_events",
"repos_url": "https://api.github.com/users/Ariya12138/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ariya12138",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-12-08T13:22:03Z
| 2023-12-08T13:22:03Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When I run my code with the "torchrun" command and it reaches the "save_to_disk" step, I suddenly get the following warning and error messages:
Because the dataset is large, the "save_to_disk" function splits it into 70 shards for saving; however, the error occurs when it reaches the 14th shard.
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
### Steps to reproduce the bug
```python
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
```

```
Saving the dataset (14/70 shards):  20%|██ | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-08_20:09:04
  rank      : 0 (local_rank: 0)
  exitcode  : -7 (pid: 2224967)
  error_file: <N/A>
  traceback : Signal 7 (SIGBUS) received by PID 2224967
```
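For context, the per-rank pattern in the snippet above corresponds to something like this minimal sketch (the dataset path, output paths, and map function are placeholders; `RANK`/`WORLD_SIZE` are set by `torchrun`):

```python
import os
from datasets import load_from_disk

rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

ds = load_from_disk("/path/to/full_dataset")  # placeholder path
# Each rank processes and saves a contiguous slice of the dataset.
ds_shard = ds.shard(num_shards=world_size, index=rank, contiguous=True)
ds_shard = ds_shard.map(lambda example: example)  # placeholder for map_fn
ds_shard.save_to_disk(f"/path/to/output/shard_{rank}")  # placeholder path
```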
### Expected behavior
I hope it can save successfully without any issues, but it seems there is a problem.
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6481/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4881
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4881/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4881/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4881/events
|
https://github.com/huggingface/datasets/issues/4881
| 1,348,495,777
|
I_kwDODunzps5QYGmh
| 4,881
|
Language names and language codes: connecting to a big database (rather than slow enrichment of custom list)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6072524?v=4",
"events_url": "https://api.github.com/users/alexis-michaud/events{/privacy}",
"followers_url": "https://api.github.com/users/alexis-michaud/followers",
"following_url": "https://api.github.com/users/alexis-michaud/following{/other_user}",
"gists_url": "https://api.github.com/users/alexis-michaud/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexis-michaud",
"id": 6072524,
"login": "alexis-michaud",
"node_id": "MDQ6VXNlcjYwNzI1MjQ=",
"organizations_url": "https://api.github.com/users/alexis-michaud/orgs",
"received_events_url": "https://api.github.com/users/alexis-michaud/received_events",
"repos_url": "https://api.github.com/users/alexis-michaud/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexis-michaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexis-michaud/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexis-michaud",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ",
"on the Hub side, there is not fine grained validation we just check that `language:` contains an array of lowercase strings between 2 and 3 characters long =)\r\n\r\nand for `language_bcp47:` we just check it's an array of strings.\r\n\r\nThe only page where we have a hardcoded list of languages is https://huggingface.co/languages and I've been thinking of hooking that page on an external database of languages (so any suggestion is super interesting), but it's not used for validation.\r\n\r\nThat being said, in `datasets` this file https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json is not really used no? Or just in the tagging tool? What about just removing it?\r\n\r\nalso cc'ing @lbourdois who's been active and helpful on those subjects in the past!",
"PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n\r\ncc @albertvillanova too",
"> PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n> \r\n> cc @albertvillanova too\r\n\r\nMany thanks for your answer! \r\n\r\nThe Glottolog database is kept up to date, and has information on the closest ISO code for each Glottocode. So providing a clean table with equivalences sounds (to me) like something perfectly reasonable to expect from their team. \r\nTo what extent would [pyglottolog](https://github.com/glottolog/pyglottolog) fit the bill / do the job? (API documentation [here](https://pyglottolog.readthedocs.io/en/latest/index.html)) I'm reaching my technical limitations here: I can't assess the distance between what they offer and what the HF team needs. \r\nI have opened an Issue in [their repo](https://github.com/glottolog/glottolog-cldf/issues/13). \r\n\r\nVery interested to see where it goes from there.",
"I just tried pyglottolog to generate a file with all the current IDs (first column).\r\n\r\n`glottolog languoids` inside the `glottolog` repository.\r\n\r\n[glottolog-languoids-v4.6-10-g5c66eec874.csv](https://github.com/huggingface/datasets/files/9417456/glottolog-languoids-v4.6-10-g5c66eec874.csv)\r\n\r\n",
"Greetings @alexis-michaud and others,\r\nI think perhaps a standards-based approach here would help everyone out both at the technical and social layers of technical innovations. \r\n\r\nLet me say a few things: \r\n1. there are multiple kinds of assets in AI that should have associated language codes. \r\n * AI Training Data sets\r\n * AI models\r\n * AI outputs\r\nThese are all distinct components which should be tagged for the language and encoding methods they operate on or enhance. For example, an AI based cross-language tool from French to English (UK variety) still needs to consider if it is operating on oral language speech or written text. This is where [IANA language sub-tags](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry) come in and are so important. I link to the official source. If one wants to use middleware such as a python package or npm package to manage strings then please make sure those packages are updating codes as they are being revised. I see that @julien-c mentioned BCP-47. BCP-47 is the current standard for language tagging. Following it will make the resources you create more findable and let future users better understand or expect any biases which may have been introduced in the different AI based products.\r\n2. BCP-47 is a technical read. However, you will notice that it identifies when to use an ISO 639-1, ISO 639-2, or ISO 639-3. code. This is important for interoperability with many systems. If you are using library systems then you should likely just stick with ISO 639-3 codes.\r\n3. If you are going to use Glottolog codes use them after an `-x-` tag in the BCP-47 format to maintain BCP-47 validity. \r\n4. You should source ISO 639-3 codes directly from the [ISO 639-3 registrar](https://iso639-3.sil.org/code_tables/639/data) as these codes are updated annually, usually in February or March. ISO 639-3 codes have multiple classes: `Active`, `Deprecated`, and `Unassigned`. This means that string length checking is not a sufficient strategy for validation.\r\n5. The names of smaller languages often change depending on the language used to describe them. The [ISO639-2 documentation](https://www.loc.gov/standards/iso639-2/php/code_list.php) has a list of language names for languages with smaller populations for languages in which descriptions about these languages are often written. For example, ISO 639-2's documentation contains the names of languages as they are used in French, German, and English. ISO 639-2 rarely is updated as it is now tied to ISO 639-3's evolution and modern systems should just use ISO 639-3, but these additional names of languages in other languages may not appear in the ISO 369-3 tables.\r\n6. Glottolog codes are also updated at least annually. Usually sometime after ISO 639-3 updates.\r\n7. Please, if the material is in a written mode, please indicate which script is used unless the IANA field has a `suppress script` value. Please use the script tag that BCP-47 calls for from [ISO 15924](https://unicode.org/iso15924/iso15924-codes.html). This also updates at least annually. \r\n8. Another great place to look for language names is the [Unicode CLDR database for locales](https://cldr.unicode.org/translation/displaynames/languagelocale-names). These ought to be congruent with ISO 639-3 but, sometimes CLDR has additional references to languages (such as the french name for a language) which is not contained in ISO 639-2 or ISO 639-3.\r\n9. 
Wikidata for language names is not always a great source of authoritative information. Language names are asymmetrical. Many times they are contrived because there is no actual name for the language in the language referring... e.g. French doesn't have a name for every language in the world, often they say something like: the language of 'x' people. — English does the same. When a language name standard does not have the best name for a language the best way to handle that is to make a change request with the standards registrar. Keeping track of the source list and the version of your source list for your language codes is very important. \r\n10. Finally, It would be a great service to technologist, minority language communities, and linguists if for all resources of the three types mentioned in number 1 above you added a record to [OLAC](http://www.language-archives.org/). — I can help you with that. OLAC is a search interface for language resources.\r\n",
"Hi everybody!\r\n\r\nAbout the point:\r\n> also cc'ing @lbourdois who's been active and helpful on those subjects in the past!\r\n\r\nDiscussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: https://github.com/huggingface/hub-docs/issues/193\r\nOnce this system has been redone and satisfies the identified needs, a redesign of the [Languages page](https://huggingface.co/languages) would also be relevant: https://github.com/huggingface/hub-docs/issues/194. \r\nI invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\n\r\nTo return to the present discussion, thank you for the various databases and methodologies you mention. It makes a big difference to have linguists in the loop 🚀.\r\n\r\nI have a couple of questions where I think an expert perspective would be appreciated:\r\n- Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\nFor example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\n- When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\n- On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone \r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\nBased on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. 
I guess there is no such database.\r\n\r\n- Are there any databases that take into account all the existing sign languages in the world?\r\nIt would be nice to have them included on the Hub.\r\n\r\n- Is there an international classification of languages?\r\nA bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later. \r\n\r\n- Finally for the CNRS team, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? 👀 And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).",
"> I invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\nOne comment on this fall back system (which generally follows the BCP-47 process). ISO 639-2 has some codes which refer to a language ambiguously. For example, I believe code `ara` is used for arabic. In some contexts arabic is considered a single language, however, Egyptian Arabic is quite different from Moroccan Arabic, which are both considered separate languages. These ambiguous codes are valid ISO 639-3 codes but they have a special status. They are called `macro codes`. They exist inside the ISO 639-3 standard to provide absolute fallback compatibility between ISO 639-2 and ISO 639-3. However, when considering AI and MT applications with language data, the unforeseen potential applications and the potential for bias using macro codes should be avoided for new applications of language tags to resources. For historical cases where it is not clear what resources were used to create the AI tools or datasets then I understand the use of ambiguous tag uses. So for clarity in language tagging I suggest:\r\n\r\n1. Strictly following BCP-47\r\n2. Whenever possible avoid the use of macro tags in the ISO 639-3 standard. These are BCP-47 valid, but could introduce biases in the application of their use in society. (Generally there are more specific tags available to use in the ISO 639-3 standard.)",
"> * Are there any databases that take into account all the existing sign languages in the world?\r\n> It would be nice to have them included on the Hub.\r\n\r\nSign Languages present an interesting case. As I understand the situation. The identification of sign languages has been identified as a component of their endangerment. Some sign languages do exist in ISO 639-3. For further discussion on the issue I refer readers to the following publications: \r\n\r\n* https://doi.org/10.3390/languages7010049\r\n* https://www.academia.edu/35870983/The_ethics_of_of_language_identification_and_ISO_639\r\n\r\nOne way to be BCP-47 compliant and identify a sign language which is not identified in any of the BCP-47 referenced standards is to use the ISO 639-3 code for undetermined language `und` and then apply a custom suffix indicator (as explained in BCP-47) `-x-` and a custom code, such as the ones used in https://doi.org/10.3390/languages7010049",
"> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nYes that would be the function of ISO 639-3. It is the reference standard for languages. It includes a code and its name and the status of the code. Many technical metadata standards for file and computer interoperability reference it, many technical library metadata standards reference it. Some linguists use it. Many governments reference it. \r\n\r\nIndexing diseases are different from indexing languages in several ways, one way is that diseases are the impact of a pathogen not the pathogen itself. If we take COVID-19 as an example, there are many varieties of the pathogen but broadly speaking there is only one disease — with many symptoms.\r\n\r\n",
">* When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nWhile these do appear on wikipedia, I don't know of any information system which uses these codes. I do know that glottolog did import ELP data at one time and its database does contain ELP data I'm not sure if Glottolog regularly ingests new versions of ELP data. I suspect that the use of Linguasphere data may be relevant to users of wikidata as a linked data attribute but I haven't heard of any linked data projects using Linguasphere data for analysis or product development. My impression is that it is fairly unused.",
"> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n>For example (I'm taking the case of Hebrew but this has happened for other languages) I [tag](https://huggingface.co/models?language=iw&sort=downloads)ged Google models with the \"iw\" tag because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\nYes. You can parse the IANA file linked to above (it is regularly updated). All deprecated tags are marked as such in that file. The new prefered tag if there is one, is indicated. ISO 639-3 also indicates a code's status but their list is relevant only codes within their domain (ISO 639-3).",
"> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n\r\nI would interpret `en-fr` as english as spoken in France. `fr`in this position refers to the geo-political entity not a second language. I see no reason that other linguists should have a different option after having read BCP-47 and understood how it works.\r\n\r\nThe functional goal here is to tag a language resource as being produced by nonnative speakers, while tagging both languages. There are several problems here. The first is that BCP-47 has no way explicit way to do this. One could use the sub code `x-` with a private use code to indicate a second language and infer some meaning as to that language's role. However, there is another problem here which complexifies the situation greatly... how do we know that those english speakers (in France, or from France, or who were native French speakers) were not speaking their third or fourth language rather than their second language. So to conceptualize a sub-tag which indicates the first language of a speech act for speakers in a second (or other) language would need to be carefully crafted. It might then be proposed to the appropriate authorities. For example three sub-tags exist.\r\n\r\nThere are three registered sub-tags out of a BCP-47 allowed 35. These are `x-`, `u-`, and `t-`. `u-` and `t-` are defined in [RFC6067 ](https://www.rfc-editor.org/rfc/rfc6067)and [RFC6497](https://www.rfc-editor.org/rfc/rfc6497) . For more information see the [Unicode CLDR documentation](https://cldr.unicode.org/index/bcp47-extension) where it says: \r\n\r\n\r\n>[IETF BCP 47 ](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t)[Tags for Identifying Languages](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t) defines the language identifiers (tags) used on the Internet and in many standards. It has an extension mechanism that allows additional information to be included. The Unicode Consortium is the maintainer of the extension ‘u’ for Locale Extensions, as described in [rfc6067](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6067&sa=D&sntz=1&usg=AOvVaw0gGWi0EjHfy1WId8k8oKAi), and the extension 't' for Transformed Content, as described in [rfc6497](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6497&sa=D&sntz=1&usg=AOvVaw0w-OUsFX1PtaKYIq31P64I).\r\n>\r\n>The subtags available for use in the 'u' extension provide language tag extensions that provide for additional information needed for identifying locales. The 'u' subtags consist of a set of keys and associated values (types). For example, a locale identifier for British English with numeric collation has the following form: en-GB-u-kn-true\r\n>\r\n>The subtags available for use in the 't' extension provide language tag extensions that provide for additional information needed for identifying transformed content, or a request to transform content in a certain way. For example, the language tag \"ja-Kana-t-it\" can be used as a content tag indicates Japanese Katakana transformed from Italian. 
It can also be used as a request for a given transformation.\r\n>\r\n>For more details on the valid subtags for these extensions, their syntax, and their meanings, see LDML Section 3.7 [Unicode BCP 47 Extension Data](http://www.google.com/url?q=http%3A%2F%2Fwww.unicode.org%2Freports%2Ftr35%2F%23Locale_Extension_Key_and_Type_Data&sa=D&sntz=1&usg=AOvVaw0lMthb9KbTJtoOd5mvv3Ha).",
"Hi @lbourdois ! Many thanks for the detailed information.\r\n\r\n> Discussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: [huggingface/hub-docs#193](https://github.com/huggingface/hub-docs/issues/193) \r\nFascinating topic! To me, the following suggestion has a lot of appeal:\r\n\"if consider that it was necessary to create an ISO 639-3 because ISO 639-1 was deficient, it would be to do the reverse and thus convert the tags from ISO 639-1 to ISO 639-2 or 3 (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes or https://iso639-3.sil.org/code_tables/639/data).\"\r\n\r\nYes, ISO 639-1 is unsuitable because it has so few codes: less than 200. To address linguistic diversity in 'unrestricted mode', a list of all languages is wanted. \r\n\r\nThe idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47). \r\n\r\nRetaining the authors' original tags and language names would be best. \r\n* For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'. \r\n* For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those. \r\n\r\nThus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost. \r\n\r\nAre industry practices so conservative that many people are happy with two-letter codes, and consider ISO 639-3 three-letter codes an unnecessary complication? That would be a pity, since there are so many advantages to using longer lists. (Somewhat like the transition to Unicode: sooo much better!) But maybe that conservative attitude _is_ widespread, and it would then need to be taken into account. In which case, one could consider offering two-letter codes as a search option. Internally, the search engine would look up the corresponding 3-letter codes, and produce the search results accordingly. \r\n\r\nNow to the other questions:\r\n\r\n> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n> For example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\nI guess that the above suggestion takes care of this case. 
The original tag (in this example, \"iw\") is retained (facilitating cross-reference with the published paper, and respecting the real: the way the dataset was originally tagged). This old tag goes into the `BCP-47` field of the dataset, which can handle quirks & oddities like this one. And a new tag is added in the `ISO 639-3` field: the 3-letter code \"heb\". \r\n\r\n> * When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nI'm afraid I never heard about Linguasphere. The [online register for Linguasphere (PDF)](http://www.linguasphere.info/jr/pdf/index/LS_index_n-n.pdf) seems to be from 1999-2000. It seems that the level of interoperability is not very high right now. (By contrast, Glottolog has [pyglottolog](https://github.com/glottolog/pyglottolog) and in my experience contacts flow well.) \r\n\r\nThe Endangered Languages Project is something Google started but initially did not 'push' very strongly, it seems. Just airing an opinion on the public Internet, it seems that the project is now solidly rooted at University of Hawaiʻi at Mānoa. It seems that they do not generate codes of their own. They refer to ISO 639-3 (Ethnologue) as a code authority when applicable, and otherwise provide comments in so many words, such as that language L currently lacks an Ethnologue code of its own (example [here](https://www.endangeredlanguages.com/lang/10624)). \r\n\r\n> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n> Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n> Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\nYes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields. \r\n\r\n> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. 
> The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nAs I understand it, Ethnologue and Glottolog both try to do that, each in its own way. The simile with diseases seems interesting, to some extent: in both cases it's about human classification of phenomena that have complexity (though some diseases are simpler than others, whereas all languages have much complexity, in different ways).\r\n\r\n> * Finally, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? :eyes: And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).\r\n\r\nThree concerns: (i) Technical specifications: we have not yet received feedback on the Japhug and Na datasets in HF. There may be technical considerations that we have not yet thought of and that would need to be taken into account before 'bulk upload'. (ii) Would there be a way to automate the process? The way @BenjaminGalliot did it for Japhug and Na, there was a manual component involved, and doing it by hand for all 200 datasets would not be an ideal workflow, given that the metadata are all clearly arranged. (iii) Some datasets are currently under a 'No derivatives' CreativeCommons license. We could go back to the depositors and argue that the 'No derivatives' mention would best be omitted (see [here a similar argument about publications](https://creativecommons.org/2020/04/21/academic-publications-under-no-derivatives-licenses-is-misguided/)): again, we'd want to be sure about the way forward before we set the process into motion.\r\n\r\nOur hope would be that some colleagues try out the [OutilsPangloss](https://gitlab.com/lacito/outilspangloss) download tool, assemble datasets from Pangloss/Cocoon as they wish, then deposit them on HF.",
"> The idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47).\r\n> \r\n> Retaining the authors' original tags and language names would be best.\r\n> \r\n> * For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'.\r\n> * For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those.\r\n> \r\n> Thus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost.\r\n\r\n@alexis-michaud raises an excellent point. Language Resource users have varying search habits (or approaches). This includes cases where two or more language names refer to a single language. A search utility/interface needs to be flexible and able to present results from various kinds of input in the search process. This could be like how the terms French/Français/Franzosisch (en/fr/de) are names for the same language or it could be a variety of the following: autoglottonyms (how the speakers of the language refer to their language), or exoglottonyms (how others refer to the language). Additionally, in web based searches I have also needed to implement diacritic sensitive and insensitive logic so that users can type with or without diacritics and not have results unnecessarily excluded. \r\n\r\nDepending on how detailed of a search problem HF seeks to solve. It may be better to off load complex search to search engines like OLAC which aggregate a lot of language resources. — as I mentioned above I can assist with the informatics on creating an OLAC feed.\r\n\r\nAbstracting search logic from actual metadata may prove a useful way to lower the technical debt overhead. Technical tools and library standards use ISO and BCP-47 Standards. So, from a bibliographic metadata perspective this seems to be the way forward with the widest set of use cases. ",
"To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo. \r\nThe code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. The advantage is also that you can directly test ideas by enter things in a search bar and see what comes up. \r\n\r\nThis application is divided into 3 points:\r\n- The first is to enter a language in natural language to get its code which can then be filled in the YAML file of the README.MD files of the HF datasets or models in order to be referenced and found by everyone.\r\nIn practice, enter the language (e.g: `English`) you are interested in to get its associated tag (e.g: `en`). You can enter several languages by separating them with a comma (e.g `French,English,German`). You will be given priority to the ISO 639-3 code if it exists otherwise the Glottocode or the BCP47 code (for varieties in particular). If none of these codes are available, it links to a page where the user can contact HF to request to add this tag. \r\nIf you enter a BCP47 code, it must be entered as follows: `Language(Territory)`, for example `French(Canada)`. Attention! If you enter a BCP-47 language, it must be entered first, otherwise the plant code will be displayed. I have to fix this problem but I am moving to a new place, I don't have an internet connection when I want and I prefer to push this first version so that you can already test things now and not have to wait days or weeks.\r\nThis point is intended to simulate the user's side of the equation, which wonders which tag he should fill in for his language.\r\n\r\n\r\n- The second is to enter a language code to obtain the name of the language in natural language.\r\nIn practice, enter the tag (ISO 639-1/2/3, Glottolog or BCP-47) you are interested in (e.g: `fra`) to get its associated language (e.g: French). You can enter several languages by separating them with a comma (e.g `fra,eng,deu`). Attention! If you enter a BCP-47 code, it must be entered first, otherwise the plant code will be displayed. Same as the other bug above (it's actually the same one).\r\nThis point is intended to simulate the side of HF that for a given tag must return the correct language.\r\n\r\n\r\n\r\nTo code these two points, I tested two approaches. \r\n\r\n1. The first one (internal DB in the app) consists in querying a database that HF would have locally at their place. To create this database, I merged the ISO 639 database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) and the Glottolog database (https://glottolog.org/meta/downloads). The result of this merge is visible in the 3rd point of the application qui is an overview of the database.\r\nIn the image below, on line 1 of the database, we can see that the Glottocode database gives an ISO 639-3 code (column ISO639P3code) but not the ISO 639 database (column 639-3). Do you have an explanation for this phenomenon?\r\n\r\n\r\n\r\nFor BCP 47 codes of the type `fr-CA`, I have retrieved the ISO-3166 alpha 1 codes of the territories (https://www.iso.org/iso-3166-country-codes.html).\r\nIn practice, what I do is if we enter `fr-CA` is that the letters before the `-` refer to a language in the `Name` column for a `639-1` == `fr` (`639-3` for `fra` or `fre`) in the base of my image above. Then I look at the letters after the `-` which refers to a territory. It comes out `French (Canada)`. 
I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern that came up.\r\n\r\n\r\n2. The second approach (with langcodes lib in the app) consists in using the Python `langcodes` library (https://github.com/rspeer/langcodes) which offers a lot of features in ready-made functions. It manages for example deprecated codes, the validity of an entered code, gives languages from code in the language of your choice (by default in English, but also autoglottonyms), etc. I invite you to read the README of the library. The only negative point is that it hasn't been updated for 10 months so basing your tag system on an external tool that isn't necessarily up to date can cause problems in the long run. But it is certainly an interesting source.\r\n\r\nFinally, I have added some information on the number of people speaking/reading the language(s) searched (figures provided by langcodes which are based on those given by ISO). This is not relevant for our topic but it could be figures that could be added as information on the https://huggingface.co/languages page. \r\n\r\n\r\n\r\nWhat could be done to improve the app if I have time:\r\n- Write the text for the app's homepage to describe what it does. This could serve as a basis for a documentation that I think will be necessary to add somewhere on the HF website to explain how the language tagging system works.\r\n- Deal with the bug mentioned above\r\n- Integrate ISO 3166-1 alpha 2 territories (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1 which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"départements\" for example).\r\n- Add autoglottonyms? (I only handle English language names for the moment)\r\n- For each language indicate to which family it belongs, in practice this could help to make data augmentation, but especially to classify the languages and find them more easily on the page https://huggingface.co/languages.",
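To make the `langcodes` option above concrete, here is a minimal sketch; this is an editorial illustration, assuming `pip install langcodes`, and the printed values follow the library's documented behaviour, which may vary across versions:

```python
# Minimal sketch of the langcodes-based lookup described above.
import langcodes

# Deprecated tags are normalized, e.g. the pre-1989 Hebrew tag:
print(langcodes.standardize_tag("iw"))  # -> 'he'

# A language + territory tag is rendered as 'Language (Territory)':
print(langcodes.Language.get("fr-CA").display_name())  # -> 'French (Canada)'

# Codes can be resolved to English names (the app's second point):
print(langcodes.Language.get("fra").display_name())  # -> 'French'
```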
"Very impressive! Using the prompt 'Japhug' (a language name), the app finds the intended language:\r\n\r\n\r\nA first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: \r\n`sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` \r\nOne need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n\r\nThus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus.\r\nIt might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.",
"> on line 1 of the database, we can see that the Glottocode database gives an ISO 639-3 code (column ISO639P3code) but not the ISO 639 database (column 639-3). Do you have an explanation for this phenomenon?\r\n\r\nThat is because the language name 'Aewa' is not found in the Ethnologue (ISO 639-3) export that you are using. [This export in table form](https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) only has one reference name (`Ref_Name`). For the language at issue, it is not 'Aewa' but ['Awishira'](https://www.ethnologue.com/language/ash).\r\n\r\nBy contrast, the language on line 0 of the database is called 'Abinomn' by both Ethnologue and Glottolog, and accordingly, columns `ISO639P3code` and `639-3` both contain the ISO 639-3 code, `bsa`.\r\n \r\nThe full Ethnologue database records alternate names for each language, and I'd bet that 'Aewa' is recorded among alternate names for the 'Ashiwira' language. I can't check because the full Ethnologue database is paywalled. \r\n\r\n\r\n[Glottolog](https://glottolog.org/resource/languoid/id/abis1238) does provide the corresponding ISO 639-3 code for 'Aewa', `ash`, which is an exact match (it refers to the same variety as Glottolog `abis1238`).\r\nIn this specific case, Glottolog provides all the relevant information. I'd say that Glottolog can be trusted for all the codes they provide, including ISO 639-3 codes: they only include them when the match is good. \r\n\r\nSee previous comment about the cases where there is no exact match between Glottolog and ISO 639-3 (suggested workaround: look at a higher-level grouping to get an ISO 639-3 code).",
"I will add these two points to my TODO list.\r\n- Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n- For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of `Japhug` , should it be just `jya`, or `jya-japh1234` or `jya-Japhug`?",
"> * Integrate ISO 3166-1 alpha 2 territories (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1 which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"départements\" for example).\r\n\r\nI'm concerned with this sort of exploration. Not because I am against innovation. In fact this is an interesting thought exercise. However, to explore this thought further creates cognitive dissidence between BCP-47 authorized codes and other code sets which are not BP-47 compliant. For that reason, I think adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging. ",
"Good job for the application!\r\n\r\n> On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\n> Yes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields.\r\n\r\nTo briefly complete what I said on this subject in a private discussion group, there is a lot of (meta)data associated with each element of a corpus (which language level, according to which criteria, knowing that even among native speakers there are differences, some of which may go beyond what seems obvious to us from a linguistic point of view, such as socio-professional category, life history, environment in the broad sense, etc.), which can be placed in ad-hoc columns, or more freely in a comment/note column. And it is the role of the researcher (in this case a linguist, in all likelihood) to do analyses (statistics...) to determine the relevant data, including criteria that may justify separating different languages (in the broad sense), making separate corpora, etc. Putting this information in the language code is in my opinion doing the job in the opposite and wrong direction, as well as bringing other problems, like where to stop in the list of multidimensional criteria to be integrated, so in my opinion, here, the minimum is the best (the important thing is in my opinion to have well-documented data, globally, by sub-corpus or by line)...\r\n\r\n> If you are going to use Glottolog codes use them after an -x- tag in the BCP-47 format to maintain BCP-47 validity.\r\n\r\nYes, for the current corpora, I have written:\r\n\r\n```\r\nlanguage:\r\n- jya\r\n- nru\r\nlanguage_bcp47:\r\n- x-japh1234\r\n- x-yong1288\r\n```\r\n\r\n> * Add autoglottonyms? (I only handle English language names for the moment)\r\n\r\nAutoglossonyms are useful (I use them prior to other glossonyms), but I'm not sure there is an easy way to retrieve them. We can find some of them in the \"Alternative Names\" panel of Glottolog, but even if we have an API to retrieve them easily, their associated language code will often not be the one we are in (hence the need to do several cycles to find one, which might not be the right one...). 
Maybe this problem needs more investigation...\r\n\r\n> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug, should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\nI strongly insist not to add **a** language name after the code: it would restart a spiral of problems, notably the choice of the language in question:\r\n* the autoglossonym: in my opinion the best choice, but you have to know it…\r\n* the English name: iniquitous,\r\n* the name in the administratively/politically dominant language of the target language if it is relevant (strictly localized without overlapping, for example): iniquitous and tendentious (and in a way a special case of the previous one)...\r\n* etc.\r\n",
"> To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo.\r\n> The code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. The advantage is also that you can directly test ideas by enter things in a search bar and see what comes up.\r\n\r\nThis is really great. You're doing a fantastic job. I love watching the creative process evolve. It is exciting. Let me provide some links to some search interfaces for further inspiration. I always find it helpful to know how others have approached a problem when figuring out my approach. I will link to three examples Glottolog, r12a's language sub-tag chooser, and the FLEx project builder wizard. The first two are online, but the last one is in an application which must be downloaded and works only on windows or linux. I have placed some notes on each of the screenshots.\r\n\r\n* **[Glottolog](https://glottolog.org/)** | [Search Query](https://glottolog.org/glottolog?name=en&namequerytype=part&multilingual=on#2/20.9/150.0) \r\n\r\n\r\n\r\n\r\n\r\n* **[r12a language sub-tag chooser](https://r12a.github.io/app-subtags/)** | [Code on github](https://github.com/r12a/app-subtags)\r\n\r\n\r\n\r\n\r\n* **FLEx Language Chooser** | [application page](https://software.sil.org/fieldworks/)\r\n\r\n\r\n",
"> In practice, what I do is if we enter `fr-CA` is that the letters before the `-` refer to a language in the `Name` column for a `639-1` == `fr` (`639-3` for `fra` or `fre`) in the base of my image above. Then I look at the letters after the `-` which refers to a territory. It comes out `French (Canada)`. I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern that came up.\r\n\r\nWhat you are doing is looking at the algorithm for Locale generation rather than BCP-47's original documentation. I'm not sure there are difference, there might be. I know that locale IDs generally follow BCP-47 But I think there are some differences such as the use of `_` vs. `-`. ",
"> A first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: `sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` One need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n> \r\n> Thus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus. It might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.\r\n\r\nThis is logical, but the fine grained assertions are not the same. That is just because they are in a hierarchical structure today doesn't mean they will be tomorrow. In some cases the glottolog is clearly referring to sub-language variants which will never receive full language status, whereas in other cases glottolog is referencing to unequal entities one or more of which should be a language. Many of the codes in glottolog have no associated documentation indicating what sort of speech variety they are. ",
"@lbourdois \r\n> * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n\r\nI'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?",
"> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\n(answer edited in view of [Benjamin Galliot's comment](https://github.com/huggingface/datasets/issues/4881#issuecomment-1237420600) \r\nEasy part of the answer first: jya-Japhug is out, because, as @BenjaminGalliot pointed out above, mixing language names with language codes will make trouble. For Japhug, `jya-Japhug` looks rather good: the pair looks nice, the one (`jya`) packed together, the other (`Japhug`) good and complete while still pretty compact. But think about languages like 'Yongning Na' or 'Yucatán Maya': a code with a space in the middle, like `nru-Yongning Na`, is really unsightly and unwieldy, not?\r\n\r\nSome [principles for language naming in English](http://hdl.handle.net/10125/24725) have been put forward, with some linguistic arguments, but always supposing that such standardization is desirable, actual standardization of language names in English may well never happen.\r\n\r\nAs for `jya-japh1234`: again, at first sight it seems cute, combining two fierce competitors (Ethnologue and Glottolog) into something that gets the best of both worlds. \r\nBut @HughP has a point: _adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging_ Strong wording, for an important comment: better stick with BCP 47. \r\n\r\nSo the solution pointed out by Benjamin, from Frances Gillis-Webber and Sabine Tittel, looks attractive: \r\njya-x-japh1234\r\n\r\nOn the other hand, if the idea for HF Datasets is simply to add the closest ISO 639-3 code for a Glottolog code, maybe it could be provided simply in three letters: providing the 'raw' ISO 639-3 code `jya`. Availability of 'straight' ISO 639-3 codes could save trouble for some users, and those who want more detail could look at the rest of the metadata and general information associated with datasets.",
"The problem seems to have already been raised here: https://drops.dagstuhl.de/opus/volltexte/2019/10368/pdf/OASIcs-LDK-2019-4.pdf\r\n\r\nAn example can be seen here :\r\n\r\n> 3.1.2 The use of privateuse sub-tag\r\nIn light of unambiguous language codes being available for the two Khoisan varieties, we\r\npropose to combine the ISO 639-3 code for the parent language N‖ng, i.e., ‘ngh’, with the\r\nprivateuse sub-tag ‘x-’ and the respective Glottocodes stated above.\r\nThe language tags for N|uu and ‖’Au can then be defined accordingly:\r\nN|uu: ngh-x-nuuu1242\r\n‖’Au: ngh-x-auni1243\r\n\r\nBy the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search",
"> > * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n> \r\n> I'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?\r\n\r\nHi @HughP, I'm happy to clear what confusion may exist here :innocent: Here is the use case. \r\nGuillaume Jacques (@rgyalrong) put together a sizeable corpus of the Japhug language. It is up on HF Datasets ([here](https://huggingface.co/datasets/Lacito/pangloss/viewer/japh1234)) as well as on Zenodo. \r\n\r\nZenodo is an all-purpose repository without adequate domain-specific metadata (\"[métadonnées métier](https://www.cines.fr/archivage/des-expertises/les-metadonnees/metadonnees-metier/)\"), and the deposits in there are not easy to locate. The Zenodo deposit is intended for a highly specific user case: someone reads about the dataset in a paper, goes to the address on Zenodo and grabs the dataset at one go. \r\n\r\nHF Datasets, on the other hand, allows users to look around among corpora. The Japhug corpus needs proper tagging so that HF Datasets users can find out about it. \r\nJaphug has an entry of its own in Glottolog, whereas it lacks an entry of its own in Ethnologue. Hence the practical usefulness of Glottolog. Ethnologue pools together, under the code `jya`, three different languages (Japhug, Tshobdun `tsho1240` and Zbu `zbua1234`). \r\n\r\nI hope that this helps.",
"> By the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search\r\n\r\nReally relevant Space, so tagging its author @cdleong, just in case!",
"@cdleong A one-stop shop for language codes: terrific!\r\nHow do you feel about the use of Glottocodes? When searching the language names 'Japhug' and 'Yongning Na' (real examples, related to a HF Datasets deposit & various research projects), the relevant Glottocodes are retrieved, and that is great (and not that easy, notably with the space in the middle of 'Yongning Na'). But this positive result is 'hidden' in the results page. Specifically: \r\n\r\n- for Japhug: when searching by language name ('Japhug'), the result in big print is 'Failure', even though there is an available Glottocode (at bottom).\r\n\r\nWhen searching by Glottocode (japh1234), same outcome: 'Result: failure!' (even though this _is_ the right Glottocode\r\nWhen searching by x-japh1234 (Glottocode encapsulated in BCP 47 syntax), one gets the message \r\n\r\n> ''x-japh1234' parses meaningfully as a language tag according to IANA\"\r\n\r\nbut there is paradoxically no link provided to Glottolog: the 'Glottolog' part of the results page is empty\r\n\r\n\r\n- Yongning Na: the correct code is identified (yong1288) but instead of foregrounding this exact match, the first result that comes up is a completely different language, called 'Yong'. \r\n\r\nTrying to formulate a conclusion (admittedly, this note is not based on intensive testing, it is just feedback on initial contact): from a user perspective, it seems that the tool could make more extensive use of Glottolog. `langcode-search` does a great job querying Glottolog, why not make more extensive use of that information? (including: to arrive at the nearest ISO 639-3 code)"
] | 2022-08-23T20:14:24Z
| 2024-04-22T15:57:28Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
**The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)
Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time?
Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow:

(input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates to which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.
**A solution that seems desirable:**
Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc.
It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes.
Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).
In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as needed, to help this worthwhile development happen.
With appreciation of HFT,
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4881/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4881/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4856
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4856/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4856/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4856/events
|
https://github.com/huggingface/datasets/issues/4856
| 1,339,779,957
|
I_kwDODunzps5P22t1
| 4,856
|
file missing when load_dataset with openwebtext on windows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10361976?v=4",
"events_url": "https://api.github.com/users/xi-loong/events{/privacy}",
"followers_url": "https://api.github.com/users/xi-loong/followers",
"following_url": "https://api.github.com/users/xi-loong/following{/other_user}",
"gists_url": "https://api.github.com/users/xi-loong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xi-loong",
"id": 10361976,
"login": "xi-loong",
"node_id": "MDQ6VXNlcjEwMzYxOTc2",
"organizations_url": "https://api.github.com/users/xi-loong/orgs",
"received_events_url": "https://api.github.com/users/xi-loong/received_events",
"repos_url": "https://api.github.com/users/xi-loong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xi-loong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xi-loong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xi-loong",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into cache_path ```F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa```\r\nthere is still raise the same error and I find the file was removed from cache_path after I run the run_mlm.py with ```python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base```."
] | 2022-08-16T04:04:22Z
| 2023-01-04T03:39:12Z
| 2023-01-04T03:39:12Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache_path and cannot find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file in 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-zip.
## Steps to reproduce the bug
```sh
python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base
```
or
```python
from datasets import load_dataset
load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None)
```
## Expected results
Loading is successful
## Actual results
Traceback (most recent call last):
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: windows
- Python version: 3.8.5
- PyArrow version: 9.0.0
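A hedged workaround sketch, untested: normalize the cache directory before handing it to `datasets`, so the doubled separator in `F://...` never reaches the file APIs. Whether this actually resolves the `Errno 22` above is not confirmed.
```python
import os
from datasets import load_dataset

# Normalize the drive-letter path before passing it to `datasets`.
cache_dir = os.path.normpath("F://huggingface/datasets")  # 'F:\\huggingface\\datasets' on Windows
ds = load_dataset("openwebtext", cache_dir=cache_dir)
```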
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10361976?v=4",
"events_url": "https://api.github.com/users/xi-loong/events{/privacy}",
"followers_url": "https://api.github.com/users/xi-loong/followers",
"following_url": "https://api.github.com/users/xi-loong/following{/other_user}",
"gists_url": "https://api.github.com/users/xi-loong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xi-loong",
"id": 10361976,
"login": "xi-loong",
"node_id": "MDQ6VXNlcjEwMzYxOTc2",
"organizations_url": "https://api.github.com/users/xi-loong/orgs",
"received_events_url": "https://api.github.com/users/xi-loong/received_events",
"repos_url": "https://api.github.com/users/xi-loong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xi-loong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xi-loong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xi-loong",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4856/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4856/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5074
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5074/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5074/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5074/events
|
https://github.com/huggingface/datasets/issues/5074
| 1,397,850,352
|
I_kwDODunzps5TUYDw
| 5,074
|
Replace AssertionErrors with more meaningful errors
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/galbwe",
"id": 20004072,
"login": "galbwe",
"node_id": "MDQ6VXNlcjIwMDA0MDcy",
"organizations_url": "https://api.github.com/users/galbwe/orgs",
"received_events_url": "https://api.github.com/users/galbwe/received_events",
"repos_url": "https://api.github.com/users/galbwe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galbwe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/galbwe",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/galbwe",
"id": 20004072,
"login": "galbwe",
"node_id": "MDQ6VXNlcjIwMDA0MDcy",
"organizations_url": "https://api.github.com/users/galbwe/orgs",
"received_events_url": "https://api.github.com/users/galbwe/received_events",
"repos_url": "https://api.github.com/users/galbwe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galbwe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/galbwe",
"user_view_type": "public"
}
] | null |
[
"Hi, can I pick up this issue?",
"#self-assign",
"Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix."
] | 2022-10-05T14:03:55Z
| 2022-10-07T14:33:11Z
| 2022-10-07T14:33:11Z
|
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
```
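The kind of rewrite being requested, as a sketch (the condition and message are illustrative, not taken from those files):
```python
# Before: opaque failure, no hint about what went wrong or why
assert split_name in split_dict

# After: a meaningful, typed error that callers can catch
if split_name not in split_dict:
    raise ValueError(f"Unknown split {split_name!r}; expected one of {list(split_dict)}")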
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5074/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5074/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5348
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5348/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5348/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5348/events
|
https://github.com/huggingface/datasets/issues/5348
| 1,486,975,626
|
I_kwDODunzps5YoXKK
| 5,348
|
The data downloaded in the download folder of the cache does not respect `umask`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SaulLu",
"id": 55560583,
"login": "SaulLu",
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SaulLu",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"note, that `datasets` already did some of that umask fixing in the past and also at the hub - the recent work on the hub about the same: https://github.com/huggingface/huggingface_hub/pull/1220\r\n\r\nAlso I noticed that each file has a .json counterpart and the latter always has the correct perms:\r\n\r\n```\r\n-rw------- 1 uue59kq cnw 173M Dec 9 01:37 537596e64721e2ae3d98785b91d30fda0360c196a8224e29658ad629e7303a4d\r\n-rw-rw---- 1 uue59kq cnw 101 Dec 9 01:37 537596e64721e2ae3d98785b91d30fda0360c196a8224e29658ad629e7303a4d.json\r\n```\r\n\r\nso perhaps cheating is possible and syncing the perms between the 2 will do the trick."
] | 2022-12-09T15:46:27Z
| 2022-12-09T17:21:26Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
For a project on a cluster, several of us share the same cache for the datasets library, and we have a problem with the permissions on the data downloaded into the cache.
Indeed, it seems that the data is downloaded by giving read and write permissions only to the user launching the command (and no permissions to the group). In our case, those permissions don't respect the `umask` of this user, which was `0007`.
Traceback:
```
Using custom data configuration default
Downloading and preparing dataset text_caps/default to /gpfswork/rech/cnw/commun/datasets/HuggingFaceM4___text_caps/default/1.0.0/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141...
Downloading data files: 100%|████████████████████| 3/3 [00:00<00:00, 921.62it/s]
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
Cell In [3], line 1
----> 1 ds = load_dataset(dataset_name)
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
1747 download_config=download_config,
1748 download_mode=download_mode,
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
1751 use_auth_token=use_auth_token,
1752 )
1754 # Build dataset for splits
1755 keep_in_memory = (
1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1757 )
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File /gpfswork/rech/cnw/commun/modules/datasets_modules/datasets/HuggingFaceM4--TextCaps/2b9ad220cd90fcf2bfb454645bc54364711b83d6d39401ffdaf8cc40882e9141/TextCaps.py:125, in TextCapsDataset._split_generators(self, dl_manager)
123 def _split_generators(self, dl_manager):
124 # urls = _URLS[self.config.name] # TODO later
--> 125 data_dir = dl_manager.download_and_extract(_URLS)
126 gen_kwargs = {
127 split_name: {
128 f"{dir_name}_path": Path(data_dir[dir_name][split_name])
(...)
133 for split_name in ["train", "val", "test"]
134 }
136 for split_name in ["train", "val", "test"]:
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
415 def download_and_extract(self, url_or_urls):
416 """Download and extract given url_or_urls.
417
418 Is roughly equivalent to:
(...)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:324, in DownloadManager.download(self, url_or_urls)
321 self.downloaded_paths.update(dict(zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten())))
323 start_time = datetime.now()
--> 324 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
325 duration = datetime.now() - start_time
326 logger.info(f"Checksum Computation took {duration.total_seconds() // 60} min")
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/download/download_manager.py:229, in DownloadManager._record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)
226 """Record size/checksum of downloaded files."""
227 for url, path in zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten()):
228 # call str to support PathLike objects
--> 229 self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict(
230 path, record_checksum=self.record_checksums
231 )
File /gpfswork/rech/cnw/commun/conda/lucile-m4_3/lib/python3.8/site-packages/datasets/utils/info_utils.py:82, in get_size_checksum_dict(path, record_checksum)
80 if record_checksum:
81 m = sha256()
---> 82 with open(path, "rb") as f:
83 for chunk in iter(lambda: f.read(1 << 20), b""):
84 m.update(chunk)
PermissionError: [Errno 13] Permission denied: '/gpfswork/rech/cnw/commun/datasets/downloads/1e6aa6d23190c30885194fabb193dce3874d902d7636b66315ee8aaa584e80d6'
```
### Steps to reproduce the bug
I think the following will reproduce the bug.
Given 2 users belonging to the same group with `umask` set to `0007`
- first run with User 1:
```python
from datasets import load_dataset
ds_name = "HuggingFaceM4/VQAv2"
ds = load_dataset(ds_name)
```
- then run with User 2:
```python
from datasets import load_dataset
ds_name = "HuggingFaceM4/TextCaps"
ds = load_dataset(ds_name)
```
### Expected behavior
No `PermissionError`
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
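A sketch of the stopgap suggested in the comments, assuming the `.json` sidecar files carry the correct (umask-respecting) permissions: mirror their mode bits onto the data files. The paths are illustrative.
```python
import glob
import os
import stat

downloads = "/gpfswork/rech/cnw/commun/datasets/downloads"  # shared cache (illustrative)
for meta in glob.glob(os.path.join(downloads, "*.json")):
    data_file = meta[: -len(".json")]
    if os.path.isfile(data_file):
        # Copy the sidecar's permission bits onto the data file.
        os.chmod(data_file, stat.S_IMODE(os.stat(meta).st_mode))
```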
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5348/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5348/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5508
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5508/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5508/events
|
https://github.com/huggingface/datasets/issues/5508
| 1,573,290,359
|
I_kwDODunzps5dxoF3
| 5,508
|
Saving a dataset after setting format to torch doesn't work, but only if filtering
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4",
"events_url": "https://api.github.com/users/joebhakim/events{/privacy}",
"followers_url": "https://api.github.com/users/joebhakim/followers",
"following_url": "https://api.github.com/users/joebhakim/following{/other_user}",
"gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joebhakim",
"id": 13984157,
"login": "joebhakim",
"node_id": "MDQ6VXNlcjEzOTg0MTU3",
"organizations_url": "https://api.github.com/users/joebhakim/orgs",
"received_events_url": "https://api.github.com/users/joebhakim/received_events",
"repos_url": "https://api.github.com/users/joebhakim/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joebhakim",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot?",
"Hi! This issue was fixed in https://github.com/huggingface/datasets/pull/4972, so please install `datasets>=2.5.0` to avoid it."
] | 2023-02-06T21:08:58Z
| 2023-02-09T14:55:26Z
| 2023-02-09T14:55:26Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # saves successfully
a.filter(None).save_to_disk("test_save_filter") # does not
>> [...] TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.
# note: skipping the format change to torch lets this work.
```
### Expected behavior
Saving to work
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
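For anyone pinned to `datasets` 2.4.0 (the comments note the issue was fixed in 2.5.0), a hedged workaround sketch, untested: fall back to plain Python formatting while filtering and saving.
```python
from datasets import Dataset

a = Dataset.from_dict({"b": [1, 2]})
a.set_format("torch")
# Temporarily drop the torch format for filter + save, then restore it.
with a.formatted_as(None):
    a.filter(None).save_to_disk("test_save_filter")
```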
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5508/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7230
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7230/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7230/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7230/events
|
https://github.com/huggingface/datasets/pull/7230
| 2,589,531,942
|
PR_kwDODunzps5-ttUV
| 7,230
|
Video support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7230). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-10-15T18:17:29Z
| 2024-10-24T16:39:51Z
| 2024-10-24T16:39:50Z
|
MEMBER
| null | null | null |
(wip and experimental)
adding the `Video` type based on `VideoReader` from `decord`
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("path/to/videos", split="train").with_format("torch")
>>> print(ds[0]["video"])
<decord.video_reader.VideoReader object at 0x337a47910>
>>> print(ds[0]["video"][0])
tensor([[[73, 73, 73],
[73, 73, 73],
[73, 73, 73],
...,
[23, 23, 23],
[23, 23, 23],
[23, 23, 23]]], dtype=torch.uint8)
```
the storage is the same as for audio and images: `{"path": pa.string(), "bytes": pa.binary()}`, and I made a small change to keep the hf:// URL in the "path" field when possible, so that the viewer can link to files on the Hub
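For reference, a minimal sketch of that storage type in pyarrow terms (the variable name is just illustrative):
```python
import pyarrow as pa

# Same storage layout as the Audio and Image features: a struct holding an
# optional path plus the raw encoded bytes.
video_storage_type = pa.struct({"path": pa.string(), "bytes": pa.binary()})
```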
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7230/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7230/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7230.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7230",
"merged_at": "2024-10-24T16:39:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7230.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7230"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5724/events
|
https://github.com/huggingface/datasets/issues/5724
| 1,659,938,135
|
I_kwDODunzps5i8KVX
| 5,724
|
Error after shuffling streaming IterableDatasets with downloaded dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4",
"events_url": "https://api.github.com/users/szxiangjn/events{/privacy}",
"followers_url": "https://api.github.com/users/szxiangjn/followers",
"following_url": "https://api.github.com/users/szxiangjn/following{/other_user}",
"gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/szxiangjn",
"id": 41177966,
"login": "szxiangjn",
"node_id": "MDQ6VXNlcjQxMTc3OTY2",
"organizations_url": "https://api.github.com/users/szxiangjn/orgs",
"received_events_url": "https://api.github.com/users/szxiangjn/received_events",
"repos_url": "https://api.github.com/users/szxiangjn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/szxiangjn",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\r\n\r\nPS: https://github.com/huggingface/datasets/pull/5331, once merged, will allow us to define C4's configs in its README, making downloading it much more user-friendly."
] | 2023-04-09T16:58:44Z
| 2023-04-20T20:37:30Z
| 2023-04-20T20:37:30Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I downloaded the C4 dataset and used streaming IterableDatasets to read it. Everything went normally until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. The shuffled dataset throws the following error when it is used by `next(iter(dataset))`:
```
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__
for key, example in ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__
for x in self.ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__
yield from self.generate_examples_fn(**kwargs_with_shuffled_shards)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables
batch = f.read(self.config.chunksize)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries
out = read(*args, **kwargs)
File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read
return self._buffer.read(size)
File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read
if not self._read_gzip_header():
File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header
raise BadGzipFile('Not a gzipped file (%r)' % magic)
gzip.BadGzipFile: Not a gzipped file (b've')
```
I found that there is no problem using the dataset this way without shuffling. Also, using `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which downloads the dataset on the fly instead of loading from local files, causes no problems either, even after shuffling.
### Steps to reproduce the bug
1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4
2.
```
import datasets
dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train')
dataset = dataset.shuffle(buffer_size=10_000, seed=42)
next(iter(dataset))
```
### Expected behavior
`next(iter(dataset))` should give me a sample from the dataset
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4",
"events_url": "https://api.github.com/users/szxiangjn/events{/privacy}",
"followers_url": "https://api.github.com/users/szxiangjn/followers",
"following_url": "https://api.github.com/users/szxiangjn/following{/other_user}",
"gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/szxiangjn",
"id": 41177966,
"login": "szxiangjn",
"node_id": "MDQ6VXNlcjQxMTc3OTY2",
"organizations_url": "https://api.github.com/users/szxiangjn/orgs",
"received_events_url": "https://api.github.com/users/szxiangjn/received_events",
"repos_url": "https://api.github.com/users/szxiangjn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/szxiangjn",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5724/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5724/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7500
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7500/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7500/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7500/events
|
https://github.com/huggingface/datasets/issues/7500
| 2,974,841,921
|
I_kwDODunzps6xUHxB
| 7,500
|
Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3817460?v=4",
"events_url": "https://api.github.com/users/benglewis/events{/privacy}",
"followers_url": "https://api.github.com/users/benglewis/followers",
"following_url": "https://api.github.com/users/benglewis/following{/other_user}",
"gists_url": "https://api.github.com/users/benglewis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/benglewis",
"id": 3817460,
"login": "benglewis",
"node_id": "MDQ6VXNlcjM4MTc0NjA=",
"organizations_url": "https://api.github.com/users/benglewis/orgs",
"received_events_url": "https://api.github.com/users/benglewis/received_events",
"repos_url": "https://api.github.com/users/benglewis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/benglewis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benglewis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/benglewis",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Does the torch `DataLoader` really require the dataset to be a subclass of `torch.utils.data.Dataset` ? Or is there a simpler type we could use ?\n\nPS: also note that a dataset without `with_format()` can also be used in a torch `DataLoader` . Calling `with_format(\"torch\")` simply makes the output of the dataset torch Tensors in an efficient way."
] | 2025-04-06T09:56:09Z
| 2025-04-15T12:57:39Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `DataLoader`, since it does not indicate that the HuggingFace `Dataset` is compatible with the PyTorch `Dataset` class. It would be great if we could get the typing to work nicely.
### Motivation
To avoid casting types in our Python code.
### Your contribution
I would be happy to contribute a PR if this is something that may be accepted and could work with the current approach.
This doesn't have to be just for PyTorch; I imagine the same thing would be useful for `tensorflow` and such, but we only need PyTorch at this stage.
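As an illustration, a minimal sketch of the friction today and the cast it currently forces (the dataset is just an example):
```python
from typing import cast

from datasets import load_dataset
from torch.utils.data import DataLoader, Dataset as TorchDataset

ds = load_dataset("glue", "mrpc", split="validation").with_format("torch")
# Works at runtime, but pyright/Pylance complains because datasets.Dataset is
# not declared compatible with torch.utils.data.Dataset, hence the cast:
loader = DataLoader(cast(TorchDataset, ds), batch_size=8)
```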
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7500/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7500/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5799
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5799/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5799/events
|
https://github.com/huggingface/datasets/issues/5799
| 1,686,334,572
|
I_kwDODunzps5kg2xs
| 5,799
|
Files downloaded to cache do not respect umask
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-04-27T08:06:05Z
| 2023-04-27T09:30:17Z
| 2023-04-27T09:30:17Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
As reported by @stas00, files downloaded to the cache do not respect umask:
```bash
$ ls -l /path/to/cache/datasets/downloads/
-rw------- 1 username username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6
```
Related to:
- #2065
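The gist of such a fix is to re-permission the finished download according to the process umask; a minimal sketch (the function name is illustrative, not the actual patch):
```python
import os

def _apply_umask(path: str) -> None:
    # os.umask has no read-only getter: set a dummy value, grab the old mask,
    # then restore it immediately.
    umask = os.umask(0o022)
    os.umask(umask)
    # Open the downloaded file up to 0o666 minus whatever the umask forbids.
    os.chmod(path, 0o666 & ~umask)
```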
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5799/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4611
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4611/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4611/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4611/events
|
https://github.com/huggingface/datasets/pull/4611
| 1,290,940,874
|
PR_kwDODunzps46rxIX
| 4,611
|
Preserve member order by MockDownloadManager.iter_archive
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-01T05:48:20Z
| 2022-07-01T16:59:11Z
| 2022-07-01T16:48:28Z
|
MEMBER
| null | null | null |
Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which might not be the same order as in the original archive.
See issue in:
- https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027
This PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive.
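For illustration, a minimal sketch of why the order can differ (paths are hypothetical):
```python
from pathlib import Path

# rglob walks the extracted directory in filesystem-listing order, which has
# no guaranteed relation to the order members were written into the archive.
for member in Path("dummy_data/extracted_archive").rglob("*"):
    if member.is_file():
        print(member)
```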
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4611/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4611/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4611.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4611",
"merged_at": "2022-07-01T16:48:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4611.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4611"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6672
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6672/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6672/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6672/events
|
https://github.com/huggingface/datasets/pull/6672
| 2,138,732,288
|
PR_kwDODunzps5nGAlw
| 6,672
|
Remove deprecated verbose parameter from CSV builder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I am merging this PR (so that it is included in the next patch release) to remove the deprecation warning raised by the CSV builder from pandas 2.2.0.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005374 / 0.011353 (-0.005979) | 0.003833 / 0.011008 (-0.007175) | 0.063465 / 0.038508 (0.024957) | 0.029564 / 0.023109 (0.006455) | 0.252759 / 0.275898 (-0.023139) | 0.274726 / 0.323480 (-0.048754) | 0.004014 / 0.007986 (-0.003971) | 0.002754 / 0.004328 (-0.001574) | 0.049351 / 0.004250 (0.045101) | 0.041858 / 0.037052 (0.004806) | 0.269023 / 0.258489 (0.010534) | 0.290670 / 0.293841 (-0.003171) | 0.028435 / 0.128546 (-0.100111) | 0.010988 / 0.075646 (-0.064658) | 0.207447 / 0.419271 (-0.211824) | 0.035945 / 0.043533 (-0.007588) | 0.257336 / 0.255139 (0.002197) | 0.267310 / 0.283200 (-0.015890) | 0.018575 / 0.141683 (-0.123108) | 1.144515 / 1.452155 (-0.307640) | 1.214614 / 1.492716 (-0.278102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103527 / 0.018006 (0.085521) | 0.310607 / 0.000490 (0.310117) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018597 / 0.037411 (-0.018814) | 0.063176 / 0.014526 (0.048650) | 0.073553 / 0.176557 (-0.103003) | 0.120648 / 0.737135 (-0.616487) | 0.075625 / 0.296338 (-0.220713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289148 / 0.215209 (0.073939) | 2.798351 / 2.077655 (0.720696) | 1.487909 / 1.504120 (-0.016211) | 1.369945 / 1.541195 (-0.171250) | 1.378889 / 
1.468490 (-0.089602) | 0.569825 / 4.584777 (-4.014952) | 2.413309 / 3.745712 (-1.332403) | 2.795668 / 5.269862 (-2.474193) | 1.757748 / 4.565676 (-2.807929) | 0.064686 / 0.424275 (-0.359589) | 0.005027 / 0.007607 (-0.002580) | 0.341835 / 0.226044 (0.115791) | 3.349915 / 2.268929 (1.080987) | 1.864253 / 55.444624 (-53.580371) | 1.595788 / 6.876477 (-5.280688) | 1.666127 / 2.142072 (-0.475945) | 0.665239 / 4.805227 (-4.139989) | 0.120563 / 6.500664 (-6.380101) | 0.043649 / 0.075469 (-0.031820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988543 / 1.841788 (-0.853244) | 11.973275 / 8.074308 (3.898967) | 9.685401 / 10.191392 (-0.505991) | 0.141416 / 0.680424 (-0.539008) | 0.014328 / 0.534201 (-0.519873) | 0.287063 / 0.579283 (-0.292220) | 0.266284 / 0.434364 (-0.168080) | 0.324643 / 0.540337 (-0.215694) | 0.423845 / 1.386936 (-0.963091) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003770 / 0.011008 (-0.007239) | 0.050879 / 0.038508 (0.012371) | 0.031929 / 0.023109 (0.008819) | 0.297739 / 0.275898 (0.021841) | 0.319380 / 0.323480 (-0.004100) | 0.004348 / 0.007986 (-0.003637) | 0.002783 / 0.004328 (-0.001545) | 0.050024 / 0.004250 (0.045774) | 0.045209 / 0.037052 (0.008157) | 0.307608 / 0.258489 (0.049119) | 0.338168 / 0.293841 (0.044327) | 0.051712 / 0.128546 (-0.076834) | 0.011092 / 0.075646 (-0.064554) | 0.059830 / 0.419271 (-0.359441) | 0.033894 / 0.043533 (-0.009638) | 0.295278 / 0.255139 (0.040139) | 0.310749 / 0.283200 (0.027550) | 0.018676 / 0.141683 (-0.123007) | 1.201086 / 1.452155 (-0.251069) | 1.258214 / 1.492716 (-0.234502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094079 / 0.018006 (0.076073) | 0.304657 / 0.000490 (0.304168) | 0.000225 / 0.000200 (0.000026) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021969 / 0.037411 (-0.015442) | 0.075749 / 0.014526 (0.061223) | 0.087878 / 0.176557 (-0.088679) | 0.126022 / 0.737135 (-0.611114) | 0.089466 / 0.296338 (-0.206873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286415 / 0.215209 (0.071206) | 2.831867 / 2.077655 (0.754212) | 1.584119 / 1.504120 (0.079999) | 1.468454 / 1.541195 (-0.072740) | 1.495831 / 1.468490 (0.027341) | 0.579569 / 4.584777 (-4.005208) | 2.477248 / 3.745712 (-1.268464) | 2.830536 / 5.269862 (-2.439325) | 1.820188 / 4.565676 (-2.745488) | 0.064408 / 0.424275 (-0.359867) | 0.005156 / 0.007607 (-0.002451) | 0.342391 / 0.226044 (0.116347) | 3.424380 / 2.268929 (1.155452) | 1.993110 / 55.444624 (-53.451514) | 1.702971 / 6.876477 (-5.173506) | 1.844281 / 2.142072 (-0.297792) | 0.668208 / 4.805227 (-4.137020) | 0.120306 / 6.500664 (-6.380358) | 0.042127 / 0.075469 (-0.033342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.019118 / 1.841788 (-0.822670) | 12.418330 / 8.074308 (4.344022) | 10.474226 / 10.191392 (0.282834) | 0.148510 / 0.680424 (-0.531914) | 0.015107 / 0.534201 (-0.519094) | 0.289488 / 0.579283 (-0.289795) | 0.278149 / 0.434364 (-0.156215) | 0.334655 / 0.540337 (-0.205682) | 0.419127 / 1.386936 (-0.967809) |\n\n</details>\n</details>\n\n\n"
] | 2024-02-16T14:26:21Z
| 2024-02-19T09:26:34Z
| 2024-02-19T09:20:22Z
|
MEMBER
| null | null | null |
Remove deprecated `verbose` parameter from CSV builder.
Note that the `verbose` parameter is deprecated since pandas 2.2.0. See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450
Fix #6671.
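For context, a minimal sketch of the deprecated usage (the CSV path is a placeholder):
```python
import pandas as pd

# Since pandas 2.2.0 this emits a deprecation warning, which is why the CSV
# builder no longer forwards the parameter:
df = pd.read_csv("data.csv", verbose=True)
```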
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6672/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6672/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6672.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6672",
"merged_at": "2024-02-19T09:20:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6672.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6672"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7437
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7437/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7437/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7437/events
|
https://github.com/huggingface/datasets/pull/7437
| 2,899,104,679
|
PR_kwDODunzps6Nkhla
| 7,437
|
Use pyupgrade --py39-plus for remaining files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyyever",
"id": 17618148,
"login": "cyyever",
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"repos_url": "https://api.github.com/users/cyyever/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyyever",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-03-06T02:12:25Z
| 2025-04-15T14:47:54Z
| null |
CONTRIBUTOR
| null | null | null |
This work follows #7428. Additionally, "requires-python" is set in pyproject.toml.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7437/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7437/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7437.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7437",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7437.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7437"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7071
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7071/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7071/events
|
https://github.com/huggingface/datasets/issues/7071
| 2,430,313,011
|
I_kwDODunzps6Q26Iz
| 7,071
|
Filter hangs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61711045?v=4",
"events_url": "https://api.github.com/users/lucienwalewski/events{/privacy}",
"followers_url": "https://api.github.com/users/lucienwalewski/followers",
"following_url": "https://api.github.com/users/lucienwalewski/following{/other_user}",
"gists_url": "https://api.github.com/users/lucienwalewski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucienwalewski",
"id": 61711045,
"login": "lucienwalewski",
"node_id": "MDQ6VXNlcjYxNzExMDQ1",
"organizations_url": "https://api.github.com/users/lucienwalewski/orgs",
"received_events_url": "https://api.github.com/users/lucienwalewski/received_events",
"repos_url": "https://api.github.com/users/lucienwalewski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucienwalewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucienwalewski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucienwalewski",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-07-25T15:29:05Z
| 2024-07-25T15:36:59Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where, notably, I have converted the data to the Parquet format.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('lcolonn/patfig', split='test')
ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
```
Eventually I press Ctrl+C and obtain this stack trace:
```
>>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter
indices = self.map(
^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function
num_examples = len(batch[next(iter(batch.keys()))])
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__
value = self.format(key)
^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format
return self.formatter.format_column(self.pa_table.select([key]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load
n, err_code = decoder.decode(b)
^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
Warning: this can even appear to crash some machines.
### Expected behavior
Should return the filtered dataset
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
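Since the stack trace shows the hang inside image decoding, one possible workaround (a sketch using the documented `input_columns` argument of `filter`, untested on this dataset) is to hand the predicate only the column it needs, so the `Image` feature is never decoded:
```python
from datasets import load_dataset

ds = load_dataset("lcolonn/patfig", split="test")
# Only cpc_class is passed to the predicate, so no image bytes are decoded.
ds_filtered = ds.filter(lambda cpc_class: cpc_class != "Y", input_columns=["cpc_class"])
```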
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7071/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7071/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6824
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6824/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6824/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6824/events
|
https://github.com/huggingface/datasets/issues/6824
| 2,251,076,197
|
I_kwDODunzps6GLLJl
| 6,824
|
Winogrande does not seem to be compatible with datasets version of 1.18.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7878204?v=4",
"events_url": "https://api.github.com/users/spliew/events{/privacy}",
"followers_url": "https://api.github.com/users/spliew/followers",
"following_url": "https://api.github.com/users/spliew/following{/other_user}",
"gists_url": "https://api.github.com/users/spliew/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/spliew",
"id": 7878204,
"login": "spliew",
"node_id": "MDQ6VXNlcjc4NzgyMDQ=",
"organizations_url": "https://api.github.com/users/spliew/orgs",
"received_events_url": "https://api.github.com/users/spliew/received_events",
"repos_url": "https://api.github.com/users/spliew/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/spliew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spliew/subscriptions",
"type": "User",
"url": "https://api.github.com/users/spliew",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Do you mean 2.18 ? Can you try to update `fsspec` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U fsspec huggingface_hub\r\n```",
"Yes I meant 2.18, and it works after updating `fsspec` and `huggingface_hub`. Thanks!"
] | 2024-04-18T16:11:04Z
| 2024-04-19T09:53:15Z
| 2024-04-19T09:52:33Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`.
I did not have this issue with version 1.17.0.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/winogrande@ebf71e3c7b5880d019ecf6099c0b09311b1084f5/winogrande_xl/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset('winogrande','winogrande_xl')
```
### Expected behavior
```
Downloading data: 100%|██████████| 2.06M/2.06M [00:00<00:00, 5.16MB/s]
Downloading data: 100%|██████████| 118k/118k [00:00<00:00, 360kB/s]
Downloading data: 100%|██████████| 85.9k/85.9k [00:00<00:00, 242kB/s]
Generating train split: 100%|██████████| 40398/40398 [00:00<00:00, 845491.12 examples/s]
Generating test split: 100%|██████████| 1767/1767 [00:00<00:00, 362501.11 examples/s]
Generating validation split: 100%|██████████| 1267/1267 [00:00<00:00, 318768.11 examples/s]
```
### Environment info
datasets version: 1.18.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7878204?v=4",
"events_url": "https://api.github.com/users/spliew/events{/privacy}",
"followers_url": "https://api.github.com/users/spliew/followers",
"following_url": "https://api.github.com/users/spliew/following{/other_user}",
"gists_url": "https://api.github.com/users/spliew/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/spliew",
"id": 7878204,
"login": "spliew",
"node_id": "MDQ6VXNlcjc4NzgyMDQ=",
"organizations_url": "https://api.github.com/users/spliew/orgs",
"received_events_url": "https://api.github.com/users/spliew/received_events",
"repos_url": "https://api.github.com/users/spliew/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/spliew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spliew/subscriptions",
"type": "User",
"url": "https://api.github.com/users/spliew",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6824/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6824/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7146
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7146/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7146/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7146/events
|
https://github.com/huggingface/datasets/pull/7146
| 2,519,820,162
|
PR_kwDODunzps57KqRV
| 7,146
|
Set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-09-11T13:53:27Z
| 2024-09-12T04:34:08Z
| 2024-09-12T04:34:06Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7146/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7146/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7146.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7146",
"merged_at": "2024-09-12T04:34:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7146.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7146"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7506
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7506/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7506/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7506/events
|
https://github.com/huggingface/datasets/issues/7506
| 2,981,687,450
|
I_kwDODunzps6xuPCa
| 7,506
|
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/66202555?v=4",
"events_url": "https://api.github.com/users/calvintanama/events{/privacy}",
"followers_url": "https://api.github.com/users/calvintanama/followers",
"following_url": "https://api.github.com/users/calvintanama/following{/other_user}",
"gists_url": "https://api.github.com/users/calvintanama/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/calvintanama",
"id": 66202555,
"login": "calvintanama",
"node_id": "MDQ6VXNlcjY2MjAyNTU1",
"organizations_url": "https://api.github.com/users/calvintanama/orgs",
"received_events_url": "https://api.github.com/users/calvintanama/received_events",
"repos_url": "https://api.github.com/users/calvintanama/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/calvintanama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calvintanama/subscriptions",
"type": "User",
"url": "https://api.github.com/users/calvintanama",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! make sure to be logged in with your HF account (e.g. using `huggingface-cli login` or passing `token=` to `load_dataset()`), otherwise you'll get rate limited at one point"
] | 2025-04-09T06:32:04Z
| 2025-04-15T13:04:31Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am trying to run fine-tuning on 4 A100 GPUs via SLURM with the axolotl training framework, which in turn uses Hugging Face's Trainer and Accelerate, on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I run into a `429 Client Error: Too Many Requests for URL` error when `next(dataloader_iter)` is called. Oddly, a short test fine-tuning (just 200 training steps) on 1 A100 GPU under SLURM works fine. Is there a rate limiter on dataset queries? Last month I could run the fine-tuning with the same settings (4 A100 GPUs on SLURM).
### Steps to reproduce the bug
You need a server with SLURM installed.
1. Create conda environment
1.1 conda create -n example_env -c conda-forge gxx=11 python=3.10
1.2 conda activate example_env
1.3 pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
1.4 conda install nvidia/label/cuda-12.4.0::cuda-toolkit
1.5 Download flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
1.6 pip3 install packaging
1.7 pip3 install ninja
1.8 pip3 install mlflow
1.9 Clone https://github.com/calvintanama/axolotl.git
1.10 `cd` to `axolotl`
1.11 pip3 install -e '.[deepspeed]'
2. Run the training
2.1. Create a folder called `config_run` in axolotl directory
2.2. Copy `config/phi3_pruned_extra_pretrain_22_29_bottleneck_residual_8_a100_4.yaml` to `config_run`
2.3. Change yaml file in the `config_run` accordingly
2.4. Change directory and conda environment name in `jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh`
2.5. Run `jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh`
### Expected behavior
This should not cause any error, but instead I got:
```
File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py", line 552, in __iter__
[rank3]: current_batch = next(dataloader_iter)
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 701, in __next__
[rank3]: data = self._next_data()
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 757, in _next_data
[rank3]: data = self._dataset_fetcher.fetch(index) # may raise StopIteration
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 33, in fetch
[rank3]: data.append(next(self.dataset_iter))
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py", line 338, in __iter__
[rank3]: for element in self.dataset:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2266, in __iter__
[rank3]: for key, example in ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1866, in __iter__
[rank3]: for key, example in self.ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1084, in __iter__
[rank3]: yield from self._iter()
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1263, in _iter
[rank3]: for key, transformed_example in outputs:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1258, in <genexpr>
[rank3]: outputs = (
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1244, in iter_outputs
[rank3]: for i, key_example in inputs_iterator:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1106, in iter_batched_inputs
[rank3]: for key, example in iterator:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1866, in __iter__
[rank3]: for key, example in self.ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1535, in __iter__
[rank3]: for x in self.ex_iterable:
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 374, in __iter__
[rank3]: for key, pa_table in self.generate_tables_fn(**gen_kwags):
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 90, in _generate_tables
[rank3]: if parquet_fragment.row_groups:
[rank3]: File "pyarrow/_dataset_parquet.pyx", line 386, in pyarrow._dataset_parquet.ParquetFileFragment.row_groups.__get__
[rank3]: File "pyarrow/_dataset_parquet.pyx", line 393, in pyarrow._dataset_parquet.ParquetFileFragment.metadata.__get__
[rank3]: File "pyarrow/_dataset_parquet.pyx", line 382, in pyarrow._dataset_parquet.ParquetFileFragment.ensure_complete_metadata
[rank3]: File "pyarrow/error.pxi", line 89, in pyarrow.lib.check_status
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 827, in read_with_retries
[rank3]: out = read(*args, **kwargs)
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 1013, in read
[rank3]: return super().read(length)
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/fsspec/spec.py", line 1941, in read
[rank3]: out = self.cache._fetch(self.loc, self.loc + length)
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/fsspec/caching.py", line 234, in _fetch
[rank3]: self.cache = self.fetcher(start, end) # new block replaces old
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 976, in _fetch_range
[rank3]: hf_raise_for_status(r)
[rank3]: File "/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 482, in hf_raise_for_status
[rank3]: raise _format(HfHubHTTPError, str(e), response) from e
[rank3]: huggingface_hub.errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/datasets/HuggingFaceFW/fineweb/resolve/0f039043b23fe1d4eed300b504aa4b4a68f1c7ba/sample/10BT/006_00000.parquet
```
### Environment info
- datasets 3.5.0
- torch 2.5.1
- transformers 4.46.2
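Following the suggestion in the comments, a minimal sketch of an authenticated load (assuming a valid token in the `HF_TOKEN` environment variable and that `sample-10BT` is the right config name for this dataset):
```python
# minimal sketch (not the axolotl config itself): authenticate before
# streaming so the Hub requests are not rate limited as anonymous traffic.
# Assumes HF_TOKEN holds a valid token and "sample-10BT" is the config name.
import os
from datasets import load_dataset

ds = load_dataset(
    "HuggingFaceFW/fineweb",
    "sample-10BT",
    split="train",
    streaming=True,
    token=os.environ["HF_TOKEN"],  # or run `huggingface-cli login` once instead
)
print(next(iter(ds))["text"][:100])
```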
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7506/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7506/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7352
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7352/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7352/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7352/events
|
https://github.com/huggingface/datasets/pull/7352
| 2,767,763,850
|
PR_kwDODunzps6GrBB5
| 7,352
|
fsspec 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7352). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-01-03T15:32:25Z
| 2025-01-03T15:34:54Z
| 2025-01-03T15:34:11Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7352/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7352/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7352.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7352",
"merged_at": "2025-01-03T15:34:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7352.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7352"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6724/events
|
https://github.com/huggingface/datasets/issues/6724
| 2,174,398,227
|
I_kwDODunzps6Bmq8T
| 6,724
|
Dataset with loading script does not work in renamed repos
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-03-07T17:38:38Z
| 2024-03-07T20:06:25Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
My data repository was first called `BramVanroy/hplt-mono-v1-2`, but I then renamed it to use underscores instead of dashes. However, it seems that `datasets` still uses the old repo name when it checks whether the repo contains a data loading script, in this line.
https://github.com/huggingface/datasets/blob/6fb6c834f008996c994b0a86c3808d0a33d44525/src/datasets/load.py#L1845
When I print `filename` it returns `hplt-mono-v1-2.py`, but the files in the repo are of course `['.gitattributes', 'README.md', 'hplt_mono_v1_2.py']`. So `filename` is derived from the original repo name instead of the renamed one.
I am not sure whether this is a caching issue or how I can resolve it.
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset(
"BramVanroy/hplt-mono-v1-2",
"ky",
trust_remote_code=True
)
```
### Expected behavior
That the most recent repo name is used when `filename` is generated.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
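For completeness, a small diagnostic sketch that reproduces the filename mismatch (the derivation below just mirrors the `load.py` line quoted above; `list_repo_files` is from `huggingface_hub`):
```python
# diagnostic sketch: compare the script filename `datasets` derives from the
# repo name with the files actually present on the Hub
from huggingface_hub import HfApi

repo_id = "BramVanroy/hplt-mono-v1-2"  # the old, dashed name
expected_script = repo_id.split("/")[-1] + ".py"  # -> "hplt-mono-v1-2.py"

files = HfApi().list_repo_files(repo_id, repo_type="dataset")
print(expected_script in files)  # False: the repo only has hplt_mono_v1_2.py
```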
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6724/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6724/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5369
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5369/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5369/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5369/events
|
https://github.com/huggingface/datasets/pull/5369
| 1,500,622,276
|
PR_kwDODunzps5Fqaj-
| 5,369
|
Distributed support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright all the tests are passing - this is ready for review",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.015146 / 0.011353 (0.003793) | 0.006683 / 0.011008 (-0.004326) | 0.125994 / 0.038508 (0.087486) | 0.041345 / 0.023109 (0.018235) | 0.378609 / 0.275898 (0.102711) | 0.483139 / 0.323480 (0.159659) | 0.009669 / 0.007986 (0.001684) | 0.005143 / 0.004328 (0.000814) | 0.092015 / 0.004250 (0.087765) | 0.052728 / 0.037052 (0.015676) | 0.397166 / 0.258489 (0.138677) | 0.465820 / 0.293841 (0.171979) | 0.051025 / 0.128546 (-0.077521) | 0.018451 / 0.075646 (-0.057196) | 0.397311 / 0.419271 (-0.021960) | 0.054842 / 0.043533 (0.011309) | 0.391203 / 0.255139 (0.136064) | 0.412743 / 0.283200 (0.129543) | 0.111356 / 0.141683 (-0.030327) | 1.697526 / 1.452155 (0.245372) | 1.795017 / 1.492716 (0.302301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253737 / 0.018006 (0.235731) | 0.583071 / 0.000490 (0.582581) | 0.005958 / 0.000200 (0.005758) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030397 / 0.037411 (-0.007014) | 0.112242 / 0.014526 (0.097716) | 0.138807 / 0.176557 (-0.037749) | 0.209820 / 0.737135 (-0.527316) | 0.139530 / 0.296338 (-0.156808) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574111 / 0.215209 (0.358902) | 5.623713 / 2.077655 (3.546058) | 2.416880 / 1.504120 (0.912760) | 1.951013 / 1.541195 (0.409819) | 2.124565 / 1.468490 
(0.656075) | 1.268854 / 4.584777 (-3.315923) | 5.942368 / 3.745712 (2.196656) | 5.413814 / 5.269862 (0.143952) | 2.931638 / 4.565676 (-1.634038) | 0.135070 / 0.424275 (-0.289205) | 0.014290 / 0.007607 (0.006683) | 0.708384 / 0.226044 (0.482340) | 7.487994 / 2.268929 (5.219065) | 3.074210 / 55.444624 (-52.370414) | 2.380583 / 6.876477 (-4.495893) | 2.522298 / 2.142072 (0.380226) | 1.336741 / 4.805227 (-3.468486) | 0.236761 / 6.500664 (-6.263903) | 0.076592 / 0.075469 (0.001123) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.629415 / 1.841788 (-0.212373) | 19.000640 / 8.074308 (10.926332) | 21.474058 / 10.191392 (11.282666) | 0.231227 / 0.680424 (-0.449197) | 0.046213 / 0.534201 (-0.487988) | 0.565703 / 0.579283 (-0.013580) | 0.662956 / 0.434364 (0.228592) | 0.656475 / 0.540337 (0.116137) | 0.762534 / 1.386936 (-0.624402) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010952 / 0.011353 (-0.000400) | 0.006259 / 0.011008 (-0.004749) | 0.132430 / 0.038508 (0.093922) | 0.037920 / 0.023109 (0.014811) | 0.483565 / 0.275898 (0.207667) | 0.528190 / 0.323480 (0.204710) | 0.008116 / 0.007986 (0.000130) | 0.006768 / 0.004328 (0.002440) | 0.100520 / 0.004250 (0.096270) | 0.055208 / 0.037052 (0.018155) | 0.484672 / 0.258489 (0.226183) | 0.556937 / 0.293841 (0.263096) | 0.057938 / 0.128546 (-0.070609) | 0.020821 / 0.075646 (-0.054826) | 0.430735 / 0.419271 (0.011464) | 0.066317 / 0.043533 (0.022785) | 0.496652 / 0.255139 (0.241513) | 0.502004 / 0.283200 (0.218804) | 0.125403 / 0.141683 (-0.016280) | 1.833396 / 1.452155 (0.381241) | 1.974517 / 1.492716 (0.481800) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269198 / 0.018006 (0.251191) | 0.620314 / 0.000490 (0.619824) | 0.000535 / 0.000200 (0.000335) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032373 / 0.037411 (-0.005039) | 0.130043 / 0.014526 (0.115517) | 0.146217 / 0.176557 (-0.030339) | 0.200187 / 0.737135 (-0.536948) | 0.152839 / 0.296338 (-0.143499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.677478 / 0.215209 (0.462268) | 6.678856 / 2.077655 (4.601201) | 3.025870 / 1.504120 (1.521750) | 2.678196 / 1.541195 (1.137001) | 2.740640 / 1.468490 (1.272150) | 1.237163 / 4.584777 (-3.347614) | 5.752621 / 3.745712 (2.006908) | 3.170435 / 5.269862 (-2.099427) | 2.049174 / 4.565676 (-2.516502) | 0.147663 / 0.424275 (-0.276612) | 0.016107 / 0.007607 (0.008500) | 0.849666 / 0.226044 (0.623621) | 8.395212 / 2.268929 (6.126283) | 3.741120 / 55.444624 (-51.703505) | 3.102926 / 6.876477 (-3.773550) | 3.233655 / 2.142072 (1.091583) | 1.520349 / 4.805227 (-3.284878) | 0.267159 / 6.500664 (-6.233505) | 0.083646 / 0.075469 (0.008177) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640458 / 1.841788 (-0.201330) | 19.043169 / 8.074308 (10.968861) | 22.786126 / 10.191392 (12.594734) | 0.218040 / 0.680424 (-0.462384) | 0.032948 / 0.534201 (-0.501253) | 0.569574 / 0.579283 (-0.009710) | 0.658746 / 0.434364 (0.224382) | 0.650501 / 0.540337 (0.110164) | 0.730588 / 1.386936 (-0.656348) |\n\n</details>\n</details>\n\n\n",
"just added a note :)",
"Hi @lhoestq ,\r\nCan you please throw some light on the following statement\r\n`If the dataset has a number of shards that is a factor of world_size (i.e. if dataset.n_shards % world_size == 0), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of world_size, skipping the other examples.`\r\n\r\nLet's assume I have 127 parquet files and world_size is 4. I was not able to fully comprehend the above statement\r\nWhat does this statement mean?\r\n`each node keeps 1 example out of world_size, skipping the other examples.`\r\nThank you!",
"If you have 128 parquet files, then `dataset.n_shards % world_size == 0`. In this case each worker can take care of 32 parquet files.\r\n\r\nOn the other hand if you have `dataset.n_shards % world_size != 0` (in your case 127 files), then we can't assign the same number of files to each worker. This is an issue because it may under-utilize your GPU at the end of your training since some workers will take longer to iterate on the dataset than others.\r\n\r\nTherefore in this case, all the workers take care of the 127 parquet files but workers will skip examples to not end up with duplicates. That's what \"each node keeps 1 example out of world_size, skipping the other examples\" means, and in your case it implies:\r\n- rank=0 will read the samples with idx=0, 4, 8 etc.\r\n- rank=1 will read the samples with idx=1, 5, 9 etc.\r\n- rank=2 will read the samples with idx=2, 6, 10 etc.\r\n- rank=3 will read the samples with idx=3, 7, 11 etc.",
"Thanks a lot @lhoestq , this helps!",
"Hi, in the case above, if we use `keep_in_memory=True` for `Dataset`, then we still need to read in n times the dataset if we use DDP on n GPUs (1 node), right? That means we need n times the memory. Is there any way to only load the data once, to save memory?",
"`Dataset` objects are memory mapped from disk so they use almost no RAM (only the current batch)\r\n\r\nAlso they are perfectly sharded using `split_dataset_by_node` so it's going to be read exactly once in total using DDP.\r\nYou can also achieve the same thing using a DistributedSampler in pytorch for DDP instead of using `split_dataset_by_node`.",
"Hi, please correct if I mistake anything: \r\n1. `Dataset` with `keep_in_memory=True` would explicitly pre-load the data into memory, instead of reading from disk via the memory map for every batch. The former way should be faster than the latter.\r\n2. When using DDP, before sending the `Dataset` object into `split_dataset_by_node` or incorporate it with `DistributedSampler`, every process still needs to pre-load the entire data into memory (when `keep_in_memory=True`) and then select the chunked indices from the loaded data. \r\n\r\nGenerally, the dilemma I'm facing is:\r\nSuppose we have a data around 120GB, and we want to use `DistributedLengthGroupedSampler` to optimize batching. When using DDP and `keep_in_memory=True`, every process loads 120GB which is not acceptable. For now, I turned off `keep_in_memory` and try to increase the number of workers for `DataLoader` to get better pipelining. \r\n\r\n**But is it possible to load 120GB once into 4 * A100 (which has around 4*120GB memory) and make each process read from this shared data from memory? Theoretically, maybe it should be faster?** ",
"Feel free to ask your questions on the [forum](https://discuss.huggingface.co/c/datasets/10) if you don't mind, this way the discussions may be useful to other people ;) "
] | 2022-12-16T17:43:47Z
| 2023-07-25T12:00:31Z
| 2023-01-16T13:33:32Z
|
MEMBER
| null | null | null |
To split your dataset across your training nodes, you can use the new [`datasets.distributed.split_dataset_by_node`]:
```python
import os
from datasets.distributed import split_dataset_by_node
ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
```
This works for both map-style datasets and iterable datasets.
The dataset is split for the node at rank `rank` in a pool of nodes of size `world_size`.
For map-style datasets:
Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset.
For iterable datasets:
If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`),
then the shards are evenly assigned across the nodes, which is the most optimized.
Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples.
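For illustration, a quick arithmetic check of which case applies (in practice you would look at `dataset.n_shards`; the shard counts below are made up):
```python
# which sharding case applies? (made-up shard counts for illustration)
world_size = 4

n_shards = 128  # e.g. 128 parquet files: 128 % 4 == 0
print(n_shards % world_size == 0)  # True -> each node is assigned 32 shards

n_shards = 127  # 127 % 4 != 0
print(n_shards % world_size == 0)  # False -> every node iterates all shards,
                                   # and rank r keeps examples with
                                   # index % world_size == r
```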
This can also be combined with a `torch.utils.data.DataLoader` if you want each node to use multiple workers to load the data.
This also supports shuffling. At each epoch, the iterable dataset shards are reshuffled across all the nodes - you just have to call `iterable_ds.set_epoch(epoch_number)`.
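A minimal end-to-end sketch of the intended usage (the dataset name and hyperparameters are placeholders; assumes the `RANK`/`WORLD_SIZE` env vars set by `torchrun`):
```python
# sketch: split a streaming dataset across nodes and reshuffle every epoch;
# "c4" and all hyperparameters below are placeholders
import os
from torch.utils.data import DataLoader
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

ds = load_dataset("c4", "en", split="train", streaming=True)
ds = ds.shuffle(seed=42, buffer_size=10_000)
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)

loader = DataLoader(ds, batch_size=8, num_workers=2)
for epoch in range(3):
    ds.set_epoch(epoch)  # reshuffle the shards across nodes for this epoch
    for batch in loader:
        ...  # training step
```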
TODO:
- [x] docs for usage in PyTorch
- [x] unit tests
- [x] integration tests with torch.distributed.launch
Related to https://github.com/huggingface/transformers/issues/20770
Close https://github.com/huggingface/datasets/issues/5360
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5369/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5369/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5369.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5369",
"merged_at": "2023-01-16T13:33:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5369.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5369"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5186
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5186/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5186/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5186/events
|
https://github.com/huggingface/datasets/issues/5186
| 1,432,045,011
|
I_kwDODunzps5VW0XT
| 5,186
|
Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null |
[
"Hi! The first `Dataset.from_sql` call also outputs the \"ImportError: Using URI string without sqlalchemy installed.\" message, but you also get \"During handling of the above exception another exception occurred: ...\" after which the ValueError is printed. I agree that this behavior makes it easy to miss the original error. \r\n\r\nI think we can improve this by not throwing the writer's ValueError if the error from a dataset script is already being handled to make debugging easier. @lhoestq @albertvillanova wdyt?",
"Yup ! Alternatively the error can be raised in sql.py before generating the examples ? In `_info` for example",
"yea @lhoestq that would probably be good. The 2nd error is useless if the 1st error is the real reason it failed. "
] | 2022-11-01T20:25:51Z
| 2022-11-15T18:24:39Z
| 2022-11-15T18:24:39Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with a message ```ValueError: Please pass `features` or at least one example when writing data``` when I don't have `sqlalchemy` installed.
### Steps to reproduce the bug
Make a new sqlite db with `sqlite3` and `pandas` from a remote [URL](https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv).
```python
import sqlite3
import pandas as pd
from datasets import Dataset
conn = sqlite3.connect('us_covid_data.db')
df = pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv')
df.to_sql('states', conn, if_exists='replace')
```
Then if you try to query this DB like this:
```python
ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db")
```
You run into the error I described above:
```ValueError: Please pass `features` or at least one example when writing data```
However, if you try to pass features, as the error suggests, then you get an error that tells you the underlying problem...
```python
from datasets import Dataset, Features, Value
features = Features({
'date': Value('date32'),
'state': Value('string'),
'fips': Value('int32'),
'cases': Value('int32'),
'deaths': Value('int32')
})
ds = Dataset.from_sql(
'''SELECT * from states WHERE state=="New York";''',
"sqlite:///us_covid_data.db",
features=features
)
```
Which results in the actual underlying error: `ImportError: Using URI string without sqlalchemy installed.`
### Expected behavior
Instead of `ValueError` about needing to pass features, we should provide the actual underlying error about not having SQLAlchemy installed when it isn't found in the environment.
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyArrow version: 10.0.0
- Pandas version: 1.2.5
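A rough sketch of the fail-fast check suggested in the comments (the helper name and its eventual placement, e.g. in `sql.py`'s `_info`, are hypothetical, not the actual patch):
```python
# rough sketch of the early check proposed in the comments; the helper name
# and where it would live in the datasets codebase are hypothetical
import importlib.util

def check_sql_uri(con) -> None:
    """Fail fast with the real cause instead of a misleading ValueError."""
    if isinstance(con, str) and importlib.util.find_spec("sqlalchemy") is None:
        raise ImportError(
            "Using a URI string for `con` requires sqlalchemy "
            "(pip install sqlalchemy)."
        )

check_sql_uri("sqlite:///us_covid_data.db")  # raises if sqlalchemy is missing
```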
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5186/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5186/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5995
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5995/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5995/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5995/events
|
https://github.com/huggingface/datasets/pull/5995
| 1,777,088,925
|
PR_kwDODunzps5UCvYJ
| 5,995
|
Support returning dataframe in map transform
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009725 / 0.011353 (-0.001628) | 0.006014 / 0.011008 (-0.004994) | 0.136039 / 0.038508 (0.097531) | 0.049685 / 0.023109 (0.026576) | 0.492967 / 0.275898 (0.217068) | 0.553775 / 0.323480 (0.230295) | 0.007421 / 0.007986 (-0.000564) | 0.004686 / 0.004328 (0.000357) | 0.106639 / 0.004250 (0.102389) | 0.073483 / 0.037052 (0.036431) | 0.507194 / 0.258489 (0.248705) | 0.535760 / 0.293841 (0.241919) | 0.049666 / 0.128546 (-0.078880) | 0.014139 / 0.075646 (-0.061507) | 0.435459 / 0.419271 (0.016188) | 0.076026 / 0.043533 (0.032493) | 0.454542 / 0.255139 (0.199403) | 0.512724 / 0.283200 (0.229524) | 0.034969 / 0.141683 (-0.106713) | 1.881048 / 1.452155 (0.428893) | 1.959915 / 1.492716 (0.467199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265322 / 0.018006 (0.247316) | 0.573963 / 0.000490 (0.573474) | 0.017493 / 0.000200 (0.017293) | 0.000637 / 0.000054 (0.000582) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028712 / 0.037411 (-0.008699) | 0.149554 / 0.014526 (0.135029) | 0.130013 / 0.176557 (-0.046544) | 0.203408 / 0.737135 (-0.533727) | 0.144778 / 0.296338 (-0.151561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.664198 / 0.215209 (0.448989) | 6.418054 / 2.077655 (4.340399) | 2.602338 / 1.504120 (1.098219) | 2.212992 / 1.541195 (0.671797) | 2.214309 / 1.468490 
(0.745819) | 0.914772 / 4.584777 (-3.670005) | 5.824831 / 3.745712 (2.079119) | 2.865381 / 5.269862 (-2.404481) | 1.906020 / 4.565676 (-2.659657) | 0.106947 / 0.424275 (-0.317328) | 0.013467 / 0.007607 (0.005860) | 0.834556 / 0.226044 (0.608512) | 8.237078 / 2.268929 (5.968150) | 3.380919 / 55.444624 (-52.063705) | 2.656713 / 6.876477 (-4.219764) | 2.834941 / 2.142072 (0.692869) | 1.151241 / 4.805227 (-3.653986) | 0.220860 / 6.500664 (-6.279804) | 0.080781 / 0.075469 (0.005312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.655128 / 1.841788 (-0.186660) | 18.696108 / 8.074308 (10.621800) | 22.882108 / 10.191392 (12.690716) | 0.236041 / 0.680424 (-0.444383) | 0.031073 / 0.534201 (-0.503128) | 0.525263 / 0.579283 (-0.054021) | 0.632933 / 0.434364 (0.198569) | 0.707228 / 0.540337 (0.166890) | 0.753508 / 1.386936 (-0.633428) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009875 / 0.011353 (-0.001478) | 0.005135 / 0.011008 (-0.005873) | 0.101307 / 0.038508 (0.062799) | 0.044895 / 0.023109 (0.021786) | 0.497824 / 0.275898 (0.221926) | 0.573098 / 0.323480 (0.249618) | 0.006669 / 0.007986 (-0.001317) | 0.004289 / 0.004328 (-0.000039) | 0.105824 / 0.004250 (0.101573) | 0.061002 / 0.037052 (0.023950) | 0.510127 / 0.258489 (0.251638) | 0.581387 / 0.293841 (0.287546) | 0.052843 / 0.128546 (-0.075703) | 0.015506 / 0.075646 (-0.060140) | 0.116057 / 0.419271 (-0.303215) | 0.063444 / 0.043533 (0.019912) | 0.479366 / 0.255139 (0.224227) | 0.518419 / 0.283200 (0.235220) | 0.034876 / 0.141683 (-0.106806) | 2.018446 / 1.452155 (0.566292) | 1.960755 / 1.492716 (0.468039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269077 / 0.018006 (0.251070) | 0.606059 / 0.000490 (0.605569) | 0.000488 / 0.000200 (0.000288) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032465 / 0.037411 (-0.004946) | 0.136517 / 0.014526 (0.121991) | 0.147740 / 0.176557 (-0.028816) | 0.193802 / 0.737135 (-0.543334) | 0.151876 / 0.296338 (-0.144462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.709866 / 0.215209 (0.494657) | 6.848193 / 2.077655 (4.770538) | 3.310853 / 1.504120 (1.806733) | 2.940813 / 1.541195 (1.399619) | 2.934934 / 1.468490 (1.466444) | 0.927104 / 4.584777 (-3.657673) | 5.921607 / 3.745712 (2.175895) | 4.926558 / 5.269862 (-0.343303) | 2.853269 / 4.565676 (-1.712407) | 0.120278 / 0.424275 (-0.303998) | 0.015468 / 0.007607 (0.007861) | 0.820509 / 0.226044 (0.594464) | 8.263136 / 2.268929 (5.994208) | 3.780214 / 55.444624 (-51.664410) | 3.108482 / 6.876477 (-3.767995) | 3.101544 / 2.142072 (0.959471) | 1.165539 / 4.805227 (-3.639688) | 0.229215 / 6.500664 (-6.271449) | 0.079862 / 0.075469 (0.004393) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.775071 / 1.841788 (-0.066717) | 19.327621 / 8.074308 (11.253313) | 23.057537 / 10.191392 (12.866145) | 0.250649 / 0.680424 (-0.429775) | 0.029767 / 0.534201 (-0.504434) | 0.554774 / 0.579283 (-0.024509) | 0.651919 / 0.434364 (0.217555) | 0.651641 / 0.540337 (0.111304) | 0.762386 / 1.386936 (-0.624550) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005997 / 0.011353 (-0.005356) | 0.003892 / 0.011008 (-0.007116) | 0.098020 / 0.038508 (0.059512) | 0.042584 / 0.023109 (0.019475) | 0.317909 / 0.275898 (0.042011) | 0.395042 / 0.323480 (0.071563) | 0.005358 / 0.007986 (-0.002628) | 0.003266 / 0.004328 (-0.001062) | 0.076698 / 0.004250 (0.072447) | 0.062331 / 0.037052 (0.025279) | 0.334900 / 0.258489 (0.076411) | 0.379355 / 0.293841 (0.085514) | 0.030815 / 0.128546 (-0.097731) | 0.008596 / 0.075646 (-0.067050) | 0.327739 / 0.419271 (-0.091533) | 0.054061 / 0.043533 (0.010528) | 0.311044 / 0.255139 (0.055905) | 0.336705 / 0.283200 (0.053506) | 0.022785 / 0.141683 (-0.118898) | 1.516793 / 1.452155 (0.064639) | 1.590435 / 1.492716 (0.097719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289157 / 0.018006 (0.271151) | 0.531074 / 0.000490 (0.530585) | 0.004672 / 0.000200 (0.004472) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026173 / 0.037411 (-0.011238) | 0.105723 / 0.014526 (0.091197) | 0.118010 / 0.176557 (-0.058547) | 0.178062 / 0.737135 (-0.559073) | 0.120059 / 0.296338 (-0.176279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410870 / 0.215209 (0.195661) | 4.042183 / 2.077655 (1.964528) | 1.830059 / 1.504120 (0.325939) | 1.638996 / 1.541195 (0.097802) | 1.701368 / 1.468490 
(0.232878) | 0.529915 / 4.584777 (-4.054861) | 3.693308 / 3.745712 (-0.052404) | 1.827875 / 5.269862 (-3.441986) | 1.063237 / 4.565676 (-3.502440) | 0.065368 / 0.424275 (-0.358907) | 0.010986 / 0.007607 (0.003379) | 0.509399 / 0.226044 (0.283354) | 5.092739 / 2.268929 (2.823810) | 2.293490 / 55.444624 (-53.151135) | 1.958742 / 6.876477 (-4.917735) | 2.024985 / 2.142072 (-0.117088) | 0.646978 / 4.805227 (-4.158249) | 0.138616 / 6.500664 (-6.362048) | 0.062101 / 0.075469 (-0.013368) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202016 / 1.841788 (-0.639772) | 14.493204 / 8.074308 (6.418896) | 12.992160 / 10.191392 (2.800768) | 0.188922 / 0.680424 (-0.491502) | 0.017594 / 0.534201 (-0.516606) | 0.399917 / 0.579283 (-0.179367) | 0.429760 / 0.434364 (-0.004604) | 0.497906 / 0.540337 (-0.042431) | 0.608745 / 1.386936 (-0.778191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006164 / 0.011353 (-0.005189) | 0.003980 / 0.011008 (-0.007028) | 0.074676 / 0.038508 (0.036168) | 0.041337 / 0.023109 (0.018228) | 0.400981 / 0.275898 (0.125083) | 0.448791 / 0.323480 (0.125312) | 0.004063 / 0.007986 (-0.003923) | 0.004443 / 0.004328 (0.000114) | 0.075011 / 0.004250 (0.070760) | 0.056494 / 0.037052 (0.019441) | 0.402054 / 0.258489 (0.143565) | 0.446122 / 0.293841 (0.152281) | 0.031752 / 0.128546 (-0.096794) | 0.008835 / 0.075646 (-0.066811) | 0.081226 / 0.419271 (-0.338046) | 0.051501 / 0.043533 (0.007969) | 0.383674 / 0.255139 (0.128535) | 0.405524 / 0.283200 (0.122325) | 0.025929 / 0.141683 (-0.115754) | 1.492985 / 1.452155 (0.040830) | 1.541601 / 1.492716 (0.048885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305149 / 0.018006 (0.287142) | 0.497259 / 0.000490 (0.496770) | 0.000420 / 0.000200 (0.000220) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027933 / 0.037411 (-0.009479) | 0.111900 / 0.014526 (0.097374) | 0.124879 / 0.176557 (-0.051678) | 0.178952 / 0.737135 (-0.558184) | 0.127698 / 0.296338 (-0.168640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448525 / 0.215209 (0.233316) | 4.486791 / 2.077655 (2.409137) | 2.256687 / 1.504120 (0.752567) | 2.061078 / 1.541195 (0.519884) | 2.078924 / 1.468490 (0.610434) | 0.534412 / 4.584777 (-4.050365) | 3.721098 / 3.745712 (-0.024614) | 1.818735 / 5.269862 (-3.451127) | 1.104198 / 4.565676 (-3.461479) | 0.066277 / 0.424275 (-0.357998) | 0.011441 / 0.007607 (0.003834) | 0.550140 / 0.226044 (0.324095) | 5.498079 / 2.268929 (3.229150) | 2.717398 / 55.444624 (-52.727227) | 2.410194 / 6.876477 (-4.466283) | 2.405304 / 2.142072 (0.263231) | 0.665432 / 4.805227 (-4.139796) | 0.141488 / 6.500664 (-6.359177) | 0.064051 / 0.075469 (-0.011419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272334 / 1.841788 (-0.569454) | 14.901608 / 8.074308 (6.827300) | 14.287857 / 10.191392 (4.096465) | 0.165337 / 0.680424 (-0.515086) | 0.017402 / 0.534201 (-0.516799) | 0.398120 / 0.579283 (-0.181163) | 0.416539 / 0.434364 (-0.017825) | 0.463890 / 0.540337 (-0.076447) | 0.567909 / 1.386936 (-0.819027) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009434 / 0.011353 (-0.001919) | 0.005567 / 0.011008 (-0.005441) | 0.122652 / 0.038508 (0.084144) | 0.050177 / 0.023109 (0.027067) | 0.384292 / 0.275898 (0.108394) | 0.446608 / 0.323480 (0.123128) | 0.006502 / 0.007986 (-0.001484) | 0.004523 / 0.004328 (0.000194) | 0.100581 / 0.004250 (0.096331) | 0.073615 / 0.037052 (0.036563) | 0.420179 / 0.258489 (0.161690) | 0.474631 / 0.293841 (0.180790) | 0.047942 / 0.128546 (-0.080604) | 0.013864 / 0.075646 (-0.061783) | 0.419384 / 0.419271 (0.000112) | 0.088317 / 0.043533 (0.044784) | 0.379620 / 0.255139 (0.124481) | 0.412639 / 0.283200 (0.129440) | 0.048947 / 0.141683 (-0.092736) | 1.823498 / 1.452155 (0.371343) | 1.966629 / 1.492716 (0.473913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300669 / 0.018006 (0.282663) | 0.593499 / 0.000490 (0.593009) | 0.007247 / 0.000200 (0.007047) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030556 / 0.037411 (-0.006856) | 0.119252 / 0.014526 (0.104726) | 0.131403 / 0.176557 (-0.045153) | 0.201845 / 0.737135 (-0.535291) | 0.139350 / 0.296338 (-0.156989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652400 / 0.215209 (0.437191) | 6.536540 / 2.077655 (4.458886) | 2.644565 / 1.504120 (1.140445) | 2.245181 / 1.541195 (0.703986) | 2.316030 / 1.468490 
(0.847540) | 0.922535 / 4.584777 (-3.662242) | 5.469065 / 3.745712 (1.723353) | 2.800489 / 5.269862 (-2.469373) | 1.749042 / 4.565676 (-2.816635) | 0.108444 / 0.424275 (-0.315831) | 0.015651 / 0.007607 (0.008044) | 0.846085 / 0.226044 (0.620041) | 8.018460 / 2.268929 (5.749531) | 3.338710 / 55.444624 (-52.105914) | 2.675998 / 6.876477 (-4.200479) | 2.918550 / 2.142072 (0.776478) | 1.135145 / 4.805227 (-3.670082) | 0.215165 / 6.500664 (-6.285499) | 0.082066 / 0.075469 (0.006597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561661 / 1.841788 (-0.280127) | 18.519035 / 8.074308 (10.444727) | 19.046300 / 10.191392 (8.854908) | 0.236890 / 0.680424 (-0.443534) | 0.027681 / 0.534201 (-0.506520) | 0.511998 / 0.579283 (-0.067285) | 0.591627 / 0.434364 (0.157264) | 0.562021 / 0.540337 (0.021683) | 0.679354 / 1.386936 (-0.707582) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009643 / 0.011353 (-0.001710) | 0.005768 / 0.011008 (-0.005241) | 0.104430 / 0.038508 (0.065922) | 0.050044 / 0.023109 (0.026935) | 0.464117 / 0.275898 (0.188219) | 0.518439 / 0.323480 (0.194959) | 0.006935 / 0.007986 (-0.001051) | 0.004316 / 0.004328 (-0.000013) | 0.094330 / 0.004250 (0.090080) | 0.071451 / 0.037052 (0.034399) | 0.492248 / 0.258489 (0.233759) | 0.555740 / 0.293841 (0.261899) | 0.047836 / 0.128546 (-0.080711) | 0.014788 / 0.075646 (-0.060859) | 0.107590 / 0.419271 (-0.311682) | 0.064396 / 0.043533 (0.020863) | 0.451529 / 0.255139 (0.196390) | 0.475025 / 0.283200 (0.191826) | 0.040006 / 0.141683 (-0.101677) | 1.797107 / 1.452155 (0.344953) | 1.879261 / 1.492716 (0.386545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298458 / 0.018006 (0.280451) | 0.613022 / 0.000490 (0.612532) | 0.003582 / 0.000200 (0.003382) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030179 / 0.037411 (-0.007232) | 0.123286 / 0.014526 (0.108760) | 0.132070 / 0.176557 (-0.044486) | 0.190883 / 0.737135 (-0.546252) | 0.138526 / 0.296338 (-0.157812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666908 / 0.215209 (0.451699) | 6.489035 / 2.077655 (4.411381) | 2.897027 / 1.504120 (1.392907) | 2.565150 / 1.541195 (1.023956) | 2.504827 / 1.468490 (1.036336) | 0.916112 / 4.584777 (-3.668665) | 5.651751 / 3.745712 (1.906039) | 2.743382 / 5.269862 (-2.526479) | 1.773338 / 4.565676 (-2.792338) | 0.128764 / 0.424275 (-0.295511) | 0.013140 / 0.007607 (0.005533) | 0.803281 / 0.226044 (0.577236) | 8.258874 / 2.268929 (5.989945) | 3.633260 / 55.444624 (-51.811364) | 2.878827 / 6.876477 (-3.997649) | 2.977178 / 2.142072 (0.835106) | 1.130467 / 4.805227 (-3.674760) | 0.226381 / 6.500664 (-6.274283) | 0.081550 / 0.075469 (0.006081) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.842927 / 1.841788 (0.001139) | 18.411520 / 8.074308 (10.337212) | 21.118228 / 10.191392 (10.926836) | 0.231526 / 0.680424 (-0.448898) | 0.029300 / 0.534201 (-0.504901) | 0.527450 / 0.579283 (-0.051834) | 0.618873 / 0.434364 (0.184509) | 0.593314 / 0.540337 (0.052976) | 0.734430 / 1.386936 (-0.652506) |\n\n</details>\n</details>\n\n\n"
] | 2023-06-27T14:15:08Z
| 2023-06-28T13:56:02Z
| 2023-06-28T13:46:33Z
|
COLLABORATOR
| null | null | null |
Allow returning Pandas DataFrames in `map` transforms.
(Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5995/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5995/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5995.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5995",
"merged_at": "2023-06-28T13:46:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5995.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5995"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7037
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7037/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7037/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7037/events
|
https://github.com/huggingface/datasets/issues/7037
| 2,400,192,419
|
I_kwDODunzps6PEAej
| 7,037
|
A bug of Dataset.to_json() function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26499566?v=4",
"events_url": "https://api.github.com/users/LinglingGreat/events{/privacy}",
"followers_url": "https://api.github.com/users/LinglingGreat/followers",
"following_url": "https://api.github.com/users/LinglingGreat/following{/other_user}",
"gists_url": "https://api.github.com/users/LinglingGreat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LinglingGreat",
"id": 26499566,
"login": "LinglingGreat",
"node_id": "MDQ6VXNlcjI2NDk5NTY2",
"organizations_url": "https://api.github.com/users/LinglingGreat/orgs",
"received_events_url": "https://api.github.com/users/LinglingGreat/received_events",
"repos_url": "https://api.github.com/users/LinglingGreat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LinglingGreat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LinglingGreat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LinglingGreat",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @LinglingGreat.\r\n\r\nI confirm this is a bug.",
"@albertvillanova I would like to take a shot at this if you aren't working on it currently. Let me know!"
] | 2024-07-10T09:11:22Z
| 2024-09-22T13:16:07Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When using the Dataset.to_json() function with the parameter lines=False, an unexpected error occurs. The stored data should be a single JSON list, but it actually turns into multiple lists, which causes an error when reading the data back.
The reason is that to_json() writes to the file in several segments based on the batch size. This is not a problem when lines=True, but with lines=False each segment is written as its own list, so multiple lists are produced whenever len(dataset) > batch_size.
### Steps to reproduce the bug
try this code:
```python
from datasets import load_dataset
import json

train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"]
output_path = "./harmless-base_hftojs.json"
print(len(train_dataset))
# Write the split as a single JSON list (not JSON Lines)
train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2)
# Reading the file back fails because it contains several concatenated lists
with open(output_path, encoding="utf-8") as f:
    data = json.loads(f.read())
```
it raises the error: json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709)
Extra square brackets have appeared here:
<img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc">
### Expected behavior
The code runs normally.
### Environment info
datasets=2.20.0
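
A possible interim workaround (a sketch, not the official fix) is to serialize the split in a single pass yourself via `Dataset.to_list()` and the standard `json` module, so only one list is ever written:
```python
import json
from datasets import load_dataset

train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"]
output_path = "./harmless-base_workaround.json"  # hypothetical path

# Serialize the whole split in one pass so the file contains exactly one JSON list.
with open(output_path, "w", encoding="utf-8") as f:
    json.dump(train_dataset.to_list(), f, ensure_ascii=False, indent=2)

# Now the file round-trips cleanly:
with open(output_path, encoding="utf-8") as f:
    data = json.loads(f.read())
```
Passing `batch_size=len(train_dataset)` to `to_json()` may also avoid the multi-segment writes, at the cost of holding the whole split in memory.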
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7037/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7037/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5575
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5575/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5575/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5575/events
|
https://github.com/huggingface/datasets/issues/5575
| 1,598,396,552
|
I_kwDODunzps5fRZiI
| 5,575
|
Metadata for each column
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11356471?v=4",
"events_url": "https://api.github.com/users/parsa-ra/events{/privacy}",
"followers_url": "https://api.github.com/users/parsa-ra/followers",
"following_url": "https://api.github.com/users/parsa-ra/following{/other_user}",
"gists_url": "https://api.github.com/users/parsa-ra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/parsa-ra",
"id": 11356471,
"login": "parsa-ra",
"node_id": "MDQ6VXNlcjExMzU2NDcx",
"organizations_url": "https://api.github.com/users/parsa-ra/orgs",
"received_events_url": "https://api.github.com/users/parsa-ra/received_events",
"repos_url": "https://api.github.com/users/parsa-ra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/parsa-ra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parsa-ra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/parsa-ra",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] |
{
"closed_at": null,
"closed_issues": 5,
"created_at": "2023-02-13T16:22:42Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
},
"description": "Next major release",
"due_on": null,
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"id": 9038583,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"open_issues": 3,
"state": "open",
"title": "3.0",
"updated_at": "2024-08-21T09:35:06Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10"
}
|
[
"Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = Features({\"col\": col_feature}, metadata=\"Some schema-level metadata\")\r\n```\r\n\r\nWDYT?",
"Sorry for the late reply, \r\nYes, I think this is the most straight-forward approach with the things that we already have.\r\n\r\n",
"@mariosasko Let me know how I can help.",
"Hi, is this feature to be implemented in the near future? It would be really nice if that would be the case! ",
"Hi, I also need this feature for tell my customer if any of the feature is encrypted with a certain key. "
] | 2023-02-24T10:53:44Z
| 2024-01-05T21:48:35Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Being able to attach some metadata to each column, as a string or any other type.
### Motivation
I will illustrate the motivation with an example: let's say we are experimenting with embeddings produced by some image encoder network, and we want to iterate through a couple of preprocessing pipelines to see which one works better in our downstream task. As a workaround right now, I compute the hash of the preprocessing that the images went through and include it in the new column's name. It would be nice to attach some kind of metadata to each column in these scenarios.
### Your contribution
Maybe we could map something like a relational database onto the metadata?
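
As a point of reference for feasibility (a sketch of what the underlying storage layer already offers, not the `datasets` API), PyArrow supports both field-level and schema-level metadata, which a feature like this could build on:
```python
import pyarrow as pa

# Field-level metadata: attach the preprocessing description directly to the column.
embedding_field = pa.field(
    "embedding",
    pa.list_(pa.float32()),
    metadata={"preprocessing": "resize_224-center_crop"},  # hypothetical key/value
)

# Schema-level metadata for dataset-wide information.
schema = pa.schema([embedding_field], metadata={"encoder": "resnet50"})

print(schema.field("embedding").metadata)
# {b'preprocessing': b'resize_224-center_crop'}
```
PyArrow stores metadata as byte strings, so arbitrary value types would need to be serialized (e.g. to JSON) on top of this.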
| null |
{
"+1": 10,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5575/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5575/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5073
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5073/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5073/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5073/events
|
https://github.com/huggingface/datasets/pull/5073
| 1,397,832,183
|
PR_kwDODunzps5AN3Gn
| 5,073
|
Restore saved format state in `load_from_disk`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asofiaoliveira",
"id": 74454835,
"login": "asofiaoliveira",
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asofiaoliveira",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T13:51:47Z
| 2022-10-11T16:55:07Z
| 2022-10-11T16:49:23Z
|
CONTRIBUTOR
| null | null | null |
Hello! @mariosasko
This pull request relates to issue #5050 and intends to restore the saved format state for datasets loaded from disk.
All I did was add a set_format call in Dataset.load_from_disk, as DatasetDict.load_from_disk relies on the former.
I don't know whether I should add a test, or where it should go, so let me know and I can work on that as well!
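
For context, a minimal sketch of the behavior this PR restores (the local path is hypothetical):
```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"x": [[1, 2], [3, 4]]})
ds.set_format("numpy")          # format state that should survive a save/load cycle
ds.save_to_disk("tmp_ds")

reloaded = load_from_disk("tmp_ds")
print(reloaded.format["type"])  # with this fix: "numpy" instead of None
```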
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5073/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5073/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5073.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5073",
"merged_at": "2022-10-11T16:49:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5073.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5073"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5129
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5129/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5129/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5129/events
|
https://github.com/huggingface/datasets/issues/5129
| 1,413,031,664
|
I_kwDODunzps5UOSbw
| 5,129
|
unexpected `cast` or `class_encode_column` result after `rename_column`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4",
"events_url": "https://api.github.com/users/quaeast/events{/privacy}",
"followers_url": "https://api.github.com/users/quaeast/followers",
"following_url": "https://api.github.com/users/quaeast/following{/other_user}",
"gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/quaeast",
"id": 35144675,
"login": "quaeast",
"node_id": "MDQ6VXNlcjM1MTQ0Njc1",
"organizations_url": "https://api.github.com/users/quaeast/orgs",
"received_events_url": "https://api.github.com/users/quaeast/received_events",
"repos_url": "https://api.github.com/users/quaeast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quaeast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/quaeast",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...",
"Hi, 方子东. I tried running the code with exact the same configuration (both datasets 2.5.2 and 2.6.1, python, pyarrow, pandas), but on Linux. The results seem to be the expected `{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}`.\r\nI don't have a Mac device. I can't verify whether this is a M1 chip-specific problem.",
"I've just tested the code on my M1 Mac, and it behaves as expected.",
"> Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...\r\n\r\nThank you for your attention and feel sorry to take your time. Since this is a bug of old version, I think mybe my problem is because `cast` operation directaly used cached data generated by older verion of `datasets`. I tried to deleted the cached data and I got expected result.\r\n"
] | 2022-10-18T11:15:24Z
| 2022-10-19T03:02:26Z
| 2022-10-19T03:02:26Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, all the values in that column are collapsed into a single value. I also ran this script with version 2.5.2, where this bug does not appear, so I switched back to the older version.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi", "en")
data = dataset['train']
data = data.remove_columns(
[
"review_id",
"product_id",
"reviewer_id",
"review_title",
"language",
"product_category",
]
)
data = data.rename_column("review_body", "text")
data1 = data.class_encode_column("stars")
print(set(data1.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
data = data.rename_column("stars", "label")
print(set(data.data.columns[0]))
# output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>}
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 0>}
```
## Expected results
the last print should be:
{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
## Actual results
but it outputs:
{<pyarrow.Int64Scalar: 0>}
## Environment info
- `datasets` version: 2.6.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
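
As the discussion above concluded, the wrong result came from cache files generated by an older `datasets` version. A sketch of clearing the stale cache for the affected split, rather than deleting files by hand:
```python
from datasets import load_dataset

dataset = load_dataset("amazon_reviews_multi", "en")

# Drop the cached Arrow files for this split so subsequent operations
# (cast, class_encode_column, ...) recompute from fresh data.
n_removed = dataset["train"].cleanup_cache_files()
print(f"removed {n_removed} cache files")
```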
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4",
"events_url": "https://api.github.com/users/quaeast/events{/privacy}",
"followers_url": "https://api.github.com/users/quaeast/followers",
"following_url": "https://api.github.com/users/quaeast/following{/other_user}",
"gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/quaeast",
"id": 35144675,
"login": "quaeast",
"node_id": "MDQ6VXNlcjM1MTQ0Njc1",
"organizations_url": "https://api.github.com/users/quaeast/orgs",
"received_events_url": "https://api.github.com/users/quaeast/received_events",
"repos_url": "https://api.github.com/users/quaeast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quaeast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/quaeast",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5129/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5129/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4807
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4807/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4807/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4807/events
|
https://github.com/huggingface/datasets/pull/4807
| 1,332,784,110
|
PR_kwDODunzps483MSH
| 4,807
|
document fix in opus_gnome dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gojiteji",
"id": 38291975,
"login": "gojiteji",
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gojiteji",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Duplicate:\r\n- #4806 "
] | 2022-08-09T06:38:13Z
| 2022-08-09T07:28:03Z
| 2022-08-09T07:28:03Z
|
CONTRIBUTOR
| null | null | null |
This fixes issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4807/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4807/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4807.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4807",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4807.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4807"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5327
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5327/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5327/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5327/events
|
https://github.com/huggingface/datasets/pull/5327
| 1,471,657,247
|
PR_kwDODunzps5EE_3Q
| 5,327
|
Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5327). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-01T17:05:23Z
| 2023-01-23T12:48:29Z
| null |
CONTRIBUTOR
| null | null | null |
will fix #5315
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5327/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5327/timeline
| null | null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5327.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5327",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5327.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5327"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5740
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5740/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5740/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5740/events
|
https://github.com/huggingface/datasets/pull/5740
| 1,664,132,130
|
PR_kwDODunzps5OHI08
| 5,740
|
Fix CI mock filesystem fixtures
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007003 / 0.011353 (-0.004350) | 0.004854 / 0.011008 (-0.006154) | 0.096982 / 0.038508 (0.058474) | 0.033218 / 0.023109 (0.010109) | 0.314088 / 0.275898 (0.038190) | 0.351315 / 0.323480 (0.027835) | 0.005679 / 0.007986 (-0.002307) | 0.005404 / 0.004328 (0.001075) | 0.071773 / 0.004250 (0.067522) | 0.044593 / 0.037052 (0.007540) | 0.323643 / 0.258489 (0.065154) | 0.357172 / 0.293841 (0.063331) | 0.036782 / 0.128546 (-0.091764) | 0.012146 / 0.075646 (-0.063501) | 0.334874 / 0.419271 (-0.084397) | 0.051475 / 0.043533 (0.007942) | 0.305949 / 0.255139 (0.050810) | 0.339326 / 0.283200 (0.056126) | 0.101509 / 0.141683 (-0.040174) | 1.458254 / 1.452155 (0.006099) | 1.535252 / 1.492716 (0.042535) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264837 / 0.018006 (0.246831) | 0.441444 / 0.000490 (0.440955) | 0.003331 / 0.000200 (0.003131) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026529 / 0.037411 (-0.010882) | 0.105924 / 0.014526 (0.091398) | 0.117191 / 0.176557 (-0.059365) | 0.176606 / 0.737135 (-0.560529) | 0.123452 / 0.296338 (-0.172887) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412351 / 0.215209 (0.197142) | 4.135468 / 2.077655 (2.057813) | 1.912820 / 1.504120 (0.408700) | 1.738993 / 1.541195 (0.197798) | 1.754228 / 1.468490 
(0.285738) | 0.692239 / 4.584777 (-3.892538) | 3.765672 / 3.745712 (0.019959) | 2.081141 / 5.269862 (-3.188720) | 1.425153 / 4.565676 (-3.140523) | 0.085055 / 0.424275 (-0.339220) | 0.011918 / 0.007607 (0.004311) | 0.517573 / 0.226044 (0.291529) | 5.179809 / 2.268929 (2.910881) | 2.471620 / 55.444624 (-52.973005) | 2.140634 / 6.876477 (-4.735843) | 2.200150 / 2.142072 (0.058077) | 0.831662 / 4.805227 (-3.973566) | 0.168828 / 6.500664 (-6.331836) | 0.062755 / 0.075469 (-0.012714) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196890 / 1.841788 (-0.644898) | 14.826423 / 8.074308 (6.752114) | 14.020782 / 10.191392 (3.829390) | 0.161275 / 0.680424 (-0.519149) | 0.017467 / 0.534201 (-0.516734) | 0.422278 / 0.579283 (-0.157005) | 0.424053 / 0.434364 (-0.010311) | 0.490768 / 0.540337 (-0.049570) | 0.584490 / 1.386936 (-0.802446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007102 / 0.011353 (-0.004250) | 0.005145 / 0.011008 (-0.005863) | 0.073823 / 0.038508 (0.035315) | 0.032947 / 0.023109 (0.009838) | 0.336978 / 0.275898 (0.061080) | 0.368961 / 0.323480 (0.045481) | 0.006052 / 0.007986 (-0.001934) | 0.003970 / 0.004328 (-0.000358) | 0.072925 / 0.004250 (0.068674) | 0.044502 / 0.037052 (0.007450) | 0.340849 / 0.258489 (0.082360) | 0.381487 / 0.293841 (0.087646) | 0.037207 / 0.128546 (-0.091339) | 0.012095 / 0.075646 (-0.063551) | 0.085206 / 0.419271 (-0.334065) | 0.056236 / 0.043533 (0.012703) | 0.334048 / 0.255139 (0.078909) | 0.360442 / 0.283200 (0.077242) | 0.104402 / 0.141683 (-0.037281) | 1.446907 / 1.452155 (-0.005248) | 1.542430 / 1.492716 (0.049713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238720 / 0.018006 (0.220714) | 0.445857 / 0.000490 (0.445367) | 0.009280 / 0.000200 (0.009080) | 0.000150 / 0.000054 (0.000095) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028414 / 0.037411 (-0.008998) | 0.110506 / 0.014526 (0.095981) | 0.124593 / 0.176557 (-0.051964) | 0.170951 / 0.737135 (-0.566184) | 0.128033 / 0.296338 (-0.168305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426206 / 0.215209 (0.210997) | 4.267289 / 2.077655 (2.189634) | 2.026880 / 1.504120 (0.522760) | 1.844052 / 1.541195 (0.302858) | 1.897697 / 1.468490 (0.429207) | 0.713545 / 4.584777 (-3.871232) | 3.815052 / 3.745712 (0.069339) | 3.217091 / 5.269862 (-2.052770) | 1.790546 / 4.565676 (-2.775130) | 0.087501 / 0.424275 (-0.336774) | 0.012136 / 0.007607 (0.004529) | 0.534495 / 0.226044 (0.308451) | 5.325913 / 2.268929 (3.056984) | 2.484309 / 55.444624 (-52.960315) | 2.149721 / 6.876477 (-4.726756) | 2.158764 / 2.142072 (0.016692) | 0.855273 / 4.805227 (-3.949954) | 0.170374 / 6.500664 (-6.330290) | 0.064053 / 0.075469 (-0.011416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253171 / 1.841788 (-0.588617) | 15.254562 / 8.074308 (7.180254) | 14.242119 / 10.191392 (4.050727) | 0.159298 / 0.680424 (-0.521126) | 0.017504 / 0.534201 (-0.516696) | 0.419710 / 0.579283 (-0.159574) | 0.417879 / 0.434364 (-0.016485) | 0.486328 / 0.540337 (-0.054009) | 0.578933 / 1.386936 (-0.808003) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008476 / 0.011353 (-0.002877) | 0.005745 / 0.011008 (-0.005263) | 0.115307 / 0.038508 (0.076799) | 0.039356 / 0.023109 (0.016247) | 0.367155 / 0.275898 (0.091257) | 0.422147 / 0.323480 (0.098667) | 0.006817 / 0.007986 (-0.001168) | 0.004652 / 0.004328 (0.000323) | 0.084045 / 0.004250 (0.079795) | 0.055483 / 0.037052 (0.018431) | 0.364249 / 0.258489 (0.105760) | 0.415975 / 0.293841 (0.122134) | 0.041322 / 0.128546 (-0.087224) | 0.014178 / 0.075646 (-0.061469) | 0.392658 / 0.419271 (-0.026614) | 0.060156 / 0.043533 (0.016623) | 0.373938 / 0.255139 (0.118799) | 0.397494 / 0.283200 (0.114294) | 0.113811 / 0.141683 (-0.027872) | 1.688581 / 1.452155 (0.236427) | 1.790374 / 1.492716 (0.297658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222203 / 0.018006 (0.204196) | 0.471109 / 0.000490 (0.470619) | 0.007071 / 0.000200 (0.006871) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032112 / 0.037411 (-0.005299) | 0.118726 / 0.014526 (0.104200) | 0.134918 / 0.176557 (-0.041639) | 0.207766 / 0.737135 (-0.529369) | 0.139756 / 0.296338 (-0.156582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479858 / 0.215209 (0.264649) | 4.798428 / 2.077655 (2.720773) | 2.221573 / 1.504120 (0.717453) | 1.964956 / 1.541195 (0.423761) | 2.021763 / 1.468490 
(0.553273) | 0.820401 / 4.584777 (-3.764376) | 4.533887 / 3.745712 (0.788175) | 4.121332 / 5.269862 (-1.148529) | 2.195807 / 4.565676 (-2.369869) | 0.103133 / 0.424275 (-0.321142) | 0.014620 / 0.007607 (0.007013) | 0.605012 / 0.226044 (0.378967) | 5.966623 / 2.268929 (3.697694) | 2.844118 / 55.444624 (-52.600506) | 2.463569 / 6.876477 (-4.412907) | 2.597177 / 2.142072 (0.455105) | 0.983201 / 4.805227 (-3.822026) | 0.199500 / 6.500664 (-6.301164) | 0.078387 / 0.075469 (0.002918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.401083 / 1.841788 (-0.440705) | 17.258725 / 8.074308 (9.184417) | 16.825992 / 10.191392 (6.634600) | 0.216762 / 0.680424 (-0.463662) | 0.021135 / 0.534201 (-0.513066) | 0.513688 / 0.579283 (-0.065595) | 0.488892 / 0.434364 (0.054529) | 0.566745 / 0.540337 (0.026408) | 0.688958 / 1.386936 (-0.697978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007948 / 0.011353 (-0.003405) | 0.005981 / 0.011008 (-0.005027) | 0.084474 / 0.038508 (0.045966) | 0.037952 / 0.023109 (0.014843) | 0.383359 / 0.275898 (0.107461) | 0.409324 / 0.323480 (0.085844) | 0.006641 / 0.007986 (-0.001344) | 0.004785 / 0.004328 (0.000456) | 0.083214 / 0.004250 (0.078964) | 0.053177 / 0.037052 (0.016125) | 0.393147 / 0.258489 (0.134658) | 0.438496 / 0.293841 (0.144655) | 0.042090 / 0.128546 (-0.086456) | 0.013373 / 0.075646 (-0.062273) | 0.097585 / 0.419271 (-0.321686) | 0.056359 / 0.043533 (0.012826) | 0.378113 / 0.255139 (0.122974) | 0.403874 / 0.283200 (0.120674) | 0.123503 / 0.141683 (-0.018180) | 1.639557 / 1.452155 (0.187403) | 1.759787 / 1.492716 (0.267071) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242534 / 0.018006 (0.224528) | 0.459040 / 0.000490 (0.458550) | 0.000454 / 0.000200 (0.000254) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031747 / 0.037411 (-0.005664) | 0.125823 / 0.014526 (0.111297) | 0.138985 / 0.176557 (-0.037571) | 0.194371 / 0.737135 (-0.542764) | 0.148905 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508201 / 0.215209 (0.292992) | 5.007519 / 2.077655 (2.929865) | 2.412956 / 1.504120 (0.908836) | 2.143378 / 1.541195 (0.602183) | 2.192966 / 1.468490 (0.724476) | 0.828497 / 4.584777 (-3.756280) | 4.496457 / 3.745712 (0.750745) | 2.397546 / 5.269862 (-2.872315) | 1.522889 / 4.565676 (-3.042787) | 0.099904 / 0.424275 (-0.324371) | 0.014561 / 0.007607 (0.006954) | 0.627417 / 0.226044 (0.401373) | 6.296441 / 2.268929 (4.027512) | 2.962858 / 55.444624 (-52.481767) | 2.543083 / 6.876477 (-4.333394) | 2.711884 / 2.142072 (0.569811) | 0.997969 / 4.805227 (-3.807259) | 0.200283 / 6.500664 (-6.300382) | 0.075934 / 0.075469 (0.000465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541707 / 1.841788 (-0.300081) | 17.791559 / 8.074308 (9.717251) | 16.782877 / 10.191392 (6.591485) | 0.171954 / 0.680424 (-0.508470) | 0.020506 / 0.534201 (-0.513695) | 0.504189 / 0.579283 (-0.075094) | 0.501655 / 0.434364 (0.067291) | 0.583120 / 0.540337 (0.042782) | 0.694931 / 1.386936 (-0.692005) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007613 / 0.011353 (-0.003740) | 0.005057 / 0.011008 (-0.005951) | 0.099147 / 0.038508 (0.060639) | 0.035358 / 0.023109 (0.012249) | 0.303442 / 0.275898 (0.027544) | 0.336898 / 0.323480 (0.013418) | 0.006216 / 0.007986 (-0.001770) | 0.004085 / 0.004328 (-0.000244) | 0.074567 / 0.004250 (0.070317) | 0.050917 / 0.037052 (0.013865) | 0.301786 / 0.258489 (0.043297) | 0.341362 / 0.293841 (0.047521) | 0.037019 / 0.128546 (-0.091528) | 0.011977 / 0.075646 (-0.063669) | 0.334688 / 0.419271 (-0.084583) | 0.051326 / 0.043533 (0.007793) | 0.299878 / 0.255139 (0.044739) | 0.325571 / 0.283200 (0.042371) | 0.110744 / 0.141683 (-0.030939) | 1.480898 / 1.452155 (0.028743) | 1.566917 / 1.492716 (0.074201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253249 / 0.018006 (0.235242) | 0.558576 / 0.000490 (0.558086) | 0.003838 / 0.000200 (0.003638) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028731 / 0.037411 (-0.008681) | 0.110643 / 0.014526 (0.096117) | 0.119560 / 0.176557 (-0.056996) | 0.178010 / 0.737135 (-0.559126) | 0.130286 / 0.296338 (-0.166053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400190 / 0.215209 (0.184981) | 3.999326 / 2.077655 (1.921672) | 1.797332 / 1.504120 (0.293212) | 1.610808 / 1.541195 (0.069613) | 1.679949 / 1.468490 
(0.211459) | 0.696539 / 4.584777 (-3.888238) | 3.784766 / 3.745712 (0.039054) | 2.205008 / 5.269862 (-3.064854) | 1.501697 / 4.565676 (-3.063979) | 0.085553 / 0.424275 (-0.338723) | 0.012223 / 0.007607 (0.004616) | 0.494858 / 0.226044 (0.268813) | 4.968535 / 2.268929 (2.699606) | 2.258759 / 55.444624 (-53.185865) | 1.926236 / 6.876477 (-4.950241) | 2.072155 / 2.142072 (-0.069917) | 0.838354 / 4.805227 (-3.966873) | 0.168810 / 6.500664 (-6.331854) | 0.064347 / 0.075469 (-0.011122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.166696 / 1.841788 (-0.675091) | 14.721287 / 8.074308 (6.646979) | 14.319272 / 10.191392 (4.127880) | 0.144534 / 0.680424 (-0.535890) | 0.017502 / 0.534201 (-0.516699) | 0.422682 / 0.579283 (-0.156601) | 0.424426 / 0.434364 (-0.009938) | 0.493561 / 0.540337 (-0.046777) | 0.586765 / 1.386936 (-0.800171) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003589) | 0.005516 / 0.011008 (-0.005492) | 0.074745 / 0.038508 (0.036237) | 0.034364 / 0.023109 (0.011255) | 0.344318 / 0.275898 (0.068420) | 0.374779 / 0.323480 (0.051299) | 0.005904 / 0.007986 (-0.002082) | 0.004323 / 0.004328 (-0.000005) | 0.073191 / 0.004250 (0.068941) | 0.051549 / 0.037052 (0.014496) | 0.341792 / 0.258489 (0.083303) | 0.387576 / 0.293841 (0.093735) | 0.037483 / 0.128546 (-0.091063) | 0.012410 / 0.075646 (-0.063237) | 0.086480 / 0.419271 (-0.332791) | 0.050035 / 0.043533 (0.006502) | 0.335475 / 0.255139 (0.080336) | 0.361436 / 0.283200 (0.078236) | 0.106890 / 0.141683 (-0.034792) | 1.464032 / 1.452155 (0.011877) | 1.563490 / 1.492716 (0.070774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268765 / 0.018006 (0.250758) | 0.563811 / 0.000490 (0.563321) | 0.004904 / 0.000200 (0.004704) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029885 / 0.037411 (-0.007526) | 0.113885 / 0.014526 (0.099359) | 0.124283 / 0.176557 (-0.052274) | 0.173619 / 0.737135 (-0.563517) | 0.131781 / 0.296338 (-0.164557) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420296 / 0.215209 (0.205087) | 4.167656 / 2.077655 (2.090001) | 1.982356 / 1.504120 (0.478237) | 1.792181 / 1.541195 (0.250986) | 1.871459 / 1.468490 (0.402969) | 0.707066 / 4.584777 (-3.877711) | 3.835922 / 3.745712 (0.090210) | 3.506796 / 5.269862 (-1.763066) | 1.857172 / 4.565676 (-2.708505) | 0.086219 / 0.424275 (-0.338056) | 0.012404 / 0.007607 (0.004796) | 0.512393 / 0.226044 (0.286348) | 5.111623 / 2.268929 (2.842695) | 2.493523 / 55.444624 (-52.951101) | 2.188220 / 6.876477 (-4.688257) | 2.319096 / 2.142072 (0.177024) | 0.844084 / 4.805227 (-3.961144) | 0.171130 / 6.500664 (-6.329534) | 0.065913 / 0.075469 (-0.009556) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284768 / 1.841788 (-0.557020) | 15.334610 / 8.074308 (7.260301) | 14.724436 / 10.191392 (4.533044) | 0.188425 / 0.680424 (-0.491999) | 0.017984 / 0.534201 (-0.516217) | 0.428150 / 0.579283 (-0.151133) | 0.429013 / 0.434364 (-0.005351) | 0.500818 / 0.540337 (-0.039519) | 0.592879 / 1.386936 (-0.794057) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006870 / 0.011353 (-0.004483) | 0.004702 / 0.011008 (-0.006306) | 0.099258 / 0.038508 (0.060750) | 0.029008 / 0.023109 (0.005899) | 0.330599 / 0.275898 (0.054701) | 0.361163 / 0.323480 (0.037683) | 0.005020 / 0.007986 (-0.002965) | 0.003474 / 0.004328 (-0.000855) | 0.075902 / 0.004250 (0.071651) | 0.037462 / 0.037052 (0.000410) | 0.336213 / 0.258489 (0.077724) | 0.370645 / 0.293841 (0.076804) | 0.032435 / 0.128546 (-0.096111) | 0.011686 / 0.075646 (-0.063960) | 0.326040 / 0.419271 (-0.093232) | 0.043750 / 0.043533 (0.000217) | 0.332629 / 0.255139 (0.077490) | 0.353302 / 0.283200 (0.070102) | 0.090421 / 0.141683 (-0.051262) | 1.470097 / 1.452155 (0.017942) | 1.544908 / 1.492716 (0.052191) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213418 / 0.018006 (0.195411) | 0.434808 / 0.000490 (0.434319) | 0.005949 / 0.000200 (0.005749) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023085 / 0.037411 (-0.014327) | 0.098222 / 0.014526 (0.083696) | 0.104543 / 0.176557 (-0.072013) | 0.165423 / 0.737135 (-0.571713) | 0.108732 / 0.296338 (-0.187606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433933 / 0.215209 (0.218724) | 4.334358 / 2.077655 (2.256704) | 2.013984 / 1.504120 (0.509864) | 1.862981 / 1.541195 (0.321787) | 1.873936 / 1.468490 
(0.405446) | 0.699857 / 4.584777 (-3.884920) | 3.417815 / 3.745712 (-0.327897) | 1.946403 / 5.269862 (-3.323459) | 1.308683 / 4.565676 (-3.256994) | 0.083297 / 0.424275 (-0.340978) | 0.012610 / 0.007607 (0.005003) | 0.540877 / 0.226044 (0.314832) | 5.408293 / 2.268929 (3.139365) | 2.529574 / 55.444624 (-52.915050) | 2.201047 / 6.876477 (-4.675429) | 2.392966 / 2.142072 (0.250894) | 0.812719 / 4.805227 (-3.992509) | 0.154013 / 6.500664 (-6.346651) | 0.067614 / 0.075469 (-0.007855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228150 / 1.841788 (-0.613638) | 14.037090 / 8.074308 (5.962782) | 14.259416 / 10.191392 (4.068024) | 0.155554 / 0.680424 (-0.524870) | 0.016521 / 0.534201 (-0.517680) | 0.379615 / 0.579283 (-0.199668) | 0.421352 / 0.434364 (-0.013012) | 0.446512 / 0.540337 (-0.093825) | 0.531802 / 1.386936 (-0.855134) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004432 / 0.011008 (-0.006577) | 0.076662 / 0.038508 (0.038154) | 0.027674 / 0.023109 (0.004565) | 0.341667 / 0.275898 (0.065769) | 0.376493 / 0.323480 (0.053014) | 0.005076 / 0.007986 (-0.002910) | 0.004655 / 0.004328 (0.000326) | 0.075698 / 0.004250 (0.071448) | 0.036905 / 0.037052 (-0.000147) | 0.342394 / 0.258489 (0.083905) | 0.383330 / 0.293841 (0.089489) | 0.031729 / 0.128546 (-0.096817) | 0.011582 / 0.075646 (-0.064064) | 0.085721 / 0.419271 (-0.333551) | 0.042012 / 0.043533 (-0.001521) | 0.342063 / 0.255139 (0.086924) | 0.367335 / 0.283200 (0.084136) | 0.089641 / 0.141683 (-0.052042) | 1.520353 / 1.452155 (0.068198) | 1.643653 / 1.492716 (0.150937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178995 / 0.018006 (0.160989) | 0.436544 / 0.000490 (0.436055) | 0.002311 / 0.000200 (0.002111) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025386 / 0.037411 (-0.012026) | 0.099717 / 0.014526 (0.085192) | 0.110809 / 0.176557 (-0.065747) | 0.162931 / 0.737135 (-0.574204) | 0.110430 / 0.296338 (-0.185909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438592 / 0.215209 (0.223382) | 4.372560 / 2.077655 (2.294905) | 2.069686 / 1.504120 (0.565567) | 1.860576 / 1.541195 (0.319382) | 1.898161 / 1.468490 (0.429671) | 0.698353 / 4.584777 (-3.886424) | 3.462440 / 3.745712 (-0.283272) | 1.868602 / 5.269862 (-3.401260) | 1.160498 / 4.565676 (-3.405179) | 0.082869 / 0.424275 (-0.341406) | 0.012690 / 0.007607 (0.005083) | 0.533278 / 0.226044 (0.307233) | 5.386214 / 2.268929 (3.117285) | 2.519243 / 55.444624 (-52.925382) | 2.171109 / 6.876477 (-4.705368) | 2.272617 / 2.142072 (0.130544) | 0.805843 / 4.805227 (-3.999384) | 0.152275 / 6.500664 (-6.348389) | 0.068038 / 0.075469 (-0.007431) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291967 / 1.841788 (-0.549821) | 14.386474 / 8.074308 (6.312166) | 14.180693 / 10.191392 (3.989301) | 0.131714 / 0.680424 (-0.548710) | 0.016596 / 0.534201 (-0.517605) | 0.384293 / 0.579283 (-0.194990) | 0.404051 / 0.434364 (-0.030313) | 0.452167 / 0.540337 (-0.088170) | 0.542718 / 1.386936 (-0.844218) |\n\n</details>\n</details>\n\n\n"
] | 2023-04-12T08:52:35Z
| 2023-04-13T11:01:24Z
| 2023-04-13T10:54:13Z
|
MEMBER
| null | null | null |
This PR fixes our CI mock filesystem fixtures.
Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the "mock" filesystem still present from a previous registration. That meant the mock filesystem fixture was not working properly: the previously registered "mock" filesystem should have been deleted by the fixture on teardown.
This PR fixes the mock filesystem fixtures so that the "mock" filesystem is properly deleted from the inner `fsspec` registry.
Tests were added to check the correct behavior of the mock filesystem fixtures.
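For illustration, a minimal sketch of the fixture pattern described above (not the exact code from this PR): the `MockFileSystem` class is a hypothetical stand-in, and the cleanup goes through fsspec's inner `_registry` dict, since the public `fsspec.registry` is a read-only mapping view.

```python
import fsspec
import pytest
from fsspec.implementations.memory import MemoryFileSystem
from fsspec.registry import _registry  # inner registry; the public `fsspec.registry` view is read-only


class MockFileSystem(MemoryFileSystem):
    """Hypothetical stand-in for the test filesystem registered by the fixture."""

    protocol = "mock"


@pytest.fixture
def mockfs():
    # No `clobber=True` needed on registration: the teardown below
    # guarantees "mock" is gone before the next test registers it again.
    fsspec.register_implementation(MockFileSystem.protocol, MockFileSystem)
    yield MockFileSystem()
    # Properly delete "mock" from the inner fsspec registry on teardown.
    _registry.pop(MockFileSystem.protocol, None)
```

With this shape, each test gets a freshly registered "mock" protocol, and no stale registration leaks into the next test.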
Related to:
- #5733
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5740/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5740/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5740.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5740",
"merged_at": "2023-04-13T10:54:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5740.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5740"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6223
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6223/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6223/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6223/events
|
https://github.com/huggingface/datasets/pull/6223
| 1,885,710,696
|
PR_kwDODunzps5Zxd5c
| 6,223
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NinoRisteski",
"id": 95188570,
"login": "NinoRisteski",
"node_id": "U_kgDOBax2Wg",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NinoRisteski",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006757 / 0.011353 (-0.004596) | 0.004233 / 0.011008 (-0.006775) | 0.084123 / 0.038508 (0.045614) | 0.077513 / 0.023109 (0.054404) | 0.357024 / 0.275898 (0.081126) | 0.392956 / 0.323480 (0.069476) | 0.005408 / 0.007986 (-0.002577) | 0.003363 / 0.004328 (-0.000966) | 0.064395 / 0.004250 (0.060145) | 0.054711 / 0.037052 (0.017659) | 0.367287 / 0.258489 (0.108798) | 0.402934 / 0.293841 (0.109093) | 0.031845 / 0.128546 (-0.096701) | 0.008646 / 0.075646 (-0.067000) | 0.288740 / 0.419271 (-0.130531) | 0.053171 / 0.043533 (0.009638) | 0.360711 / 0.255139 (0.105572) | 0.388707 / 0.283200 (0.105507) | 0.025321 / 0.141683 (-0.116361) | 1.500684 / 1.452155 (0.048529) | 1.585747 / 1.492716 (0.093030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207329 / 0.018006 (0.189323) | 0.465304 / 0.000490 (0.464814) | 0.003229 / 0.000200 (0.003029) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028752 / 0.037411 (-0.008659) | 0.085327 / 0.014526 (0.070802) | 0.332210 / 0.176557 (0.155653) | 0.178779 / 0.737135 (-0.558356) | 0.097765 / 0.296338 (-0.198573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403710 / 0.215209 (0.188501) | 4.027069 / 2.077655 (1.949414) | 2.053451 / 1.504120 (0.549331) | 1.906647 / 1.541195 (0.365452) | 1.992507 / 1.468490 
(0.524017) | 0.490203 / 4.584777 (-4.094574) | 3.696569 / 3.745712 (-0.049143) | 3.319919 / 5.269862 (-1.949943) | 2.072794 / 4.565676 (-2.492883) | 0.057893 / 0.424275 (-0.366383) | 0.007723 / 0.007607 (0.000116) | 0.485400 / 0.226044 (0.259355) | 4.842891 / 2.268929 (2.573963) | 2.578949 / 55.444624 (-52.865675) | 2.229217 / 6.876477 (-4.647259) | 2.468017 / 2.142072 (0.325945) | 0.595236 / 4.805227 (-4.209992) | 0.135641 / 6.500664 (-6.365023) | 0.061232 / 0.075469 (-0.014237) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307059 / 1.841788 (-0.534729) | 20.108581 / 8.074308 (12.034273) | 14.438985 / 10.191392 (4.247593) | 0.168878 / 0.680424 (-0.511545) | 0.018208 / 0.534201 (-0.515993) | 0.395986 / 0.579283 (-0.183297) | 0.427440 / 0.434364 (-0.006924) | 0.459917 / 0.540337 (-0.080421) | 0.631379 / 1.386936 (-0.755557) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007002 / 0.011353 (-0.004351) | 0.004120 / 0.011008 (-0.006888) | 0.064817 / 0.038508 (0.026309) | 0.081297 / 0.023109 (0.058188) | 0.405598 / 0.275898 (0.129700) | 0.442360 / 0.323480 (0.118880) | 0.005475 / 0.007986 (-0.002511) | 0.003483 / 0.004328 (-0.000845) | 0.064750 / 0.004250 (0.060499) | 0.058111 / 0.037052 (0.021059) | 0.410154 / 0.258489 (0.151665) | 0.445137 / 0.293841 (0.151296) | 0.033314 / 0.128546 (-0.095232) | 0.008747 / 0.075646 (-0.066899) | 0.071595 / 0.419271 (-0.347676) | 0.048894 / 0.043533 (0.005361) | 0.409162 / 0.255139 (0.154023) | 0.428877 / 0.283200 (0.145677) | 0.024127 / 0.141683 (-0.117556) | 1.521369 / 1.452155 (0.069214) | 1.573505 / 1.492716 (0.080789) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233199 / 0.018006 (0.215193) | 0.455619 / 0.000490 (0.455129) | 0.003688 / 0.000200 (0.003488) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033186 / 0.037411 (-0.004225) | 0.100528 / 0.014526 (0.086003) | 0.105617 / 0.176557 (-0.070940) | 0.159437 / 0.737135 (-0.577698) | 0.108064 / 0.296338 (-0.188274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435509 / 0.215209 (0.220300) | 4.339920 / 2.077655 (2.262265) | 2.368983 / 1.504120 (0.864863) | 2.211761 / 1.541195 (0.670566) | 2.301701 / 1.468490 (0.833211) | 0.495144 / 4.584777 (-4.089633) | 3.768882 / 3.745712 (0.023170) | 3.348940 / 5.269862 (-1.920922) | 2.081142 / 4.565676 (-2.484534) | 0.058184 / 0.424275 (-0.366091) | 0.007597 / 0.007607 (-0.000010) | 0.508806 / 0.226044 (0.282762) | 5.089226 / 2.268929 (2.820297) | 2.851930 / 55.444624 (-52.592694) | 2.512144 / 6.876477 (-4.364332) | 2.724461 / 2.142072 (0.582388) | 0.593446 / 4.805227 (-4.211781) | 0.134908 / 6.500664 (-6.365756) | 0.060811 / 0.075469 (-0.014658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362279 / 1.841788 (-0.479508) | 20.548216 / 8.074308 (12.473908) | 15.179181 / 10.191392 (4.987789) | 0.170249 / 0.680424 (-0.510175) | 0.020772 / 0.534201 (-0.513429) | 0.398737 / 0.579283 (-0.180546) | 0.441487 / 0.434364 (0.007124) | 0.480096 / 0.540337 (-0.060242) | 0.645825 / 1.386936 (-0.741111) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-07T11:33:20Z
| 2023-09-13T22:32:31Z
| 2023-09-13T22:23:42Z
|
CONTRIBUTOR
| null | null | null |
Fixed a few typos.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6223/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6223/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6223.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6223",
"merged_at": "2023-09-13T22:23:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6223.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6223"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6265
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6265/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6265/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6265/events
|
https://github.com/huggingface/datasets/pull/6265
| 1,915,651,566
|
PR_kwDODunzps5bWDfc
| 6,265
|
Remove `apache_beam` import in `BeamBasedBuilder._save_info`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005896 / 0.011353 (-0.005457) | 0.003642 / 0.011008 (-0.007366) | 0.081917 / 0.038508 (0.043409) | 0.059513 / 0.023109 (0.036404) | 0.341422 / 0.275898 (0.065524) | 0.359278 / 0.323480 (0.035798) | 0.004707 / 0.007986 (-0.003279) | 0.002938 / 0.004328 (-0.001390) | 0.063095 / 0.004250 (0.058845) | 0.051777 / 0.037052 (0.014725) | 0.321114 / 0.258489 (0.062625) | 0.363823 / 0.293841 (0.069982) | 0.027590 / 0.128546 (-0.100957) | 0.007846 / 0.075646 (-0.067800) | 0.261197 / 0.419271 (-0.158074) | 0.045812 / 0.043533 (0.002279) | 0.319787 / 0.255139 (0.064648) | 0.341839 / 0.283200 (0.058640) | 0.021913 / 0.141683 (-0.119770) | 1.397525 / 1.452155 (-0.054630) | 1.495902 / 1.492716 (0.003186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224815 / 0.018006 (0.206809) | 0.425780 / 0.000490 (0.425290) | 0.006934 / 0.000200 (0.006734) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024342 / 0.037411 (-0.013070) | 0.073923 / 0.014526 (0.059398) | 0.082108 / 0.176557 (-0.094448) | 0.143017 / 0.737135 (-0.594119) | 0.083163 / 0.296338 (-0.213175) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398244 / 0.215209 (0.183035) | 3.957688 / 2.077655 (1.880033) | 1.904615 / 1.504120 (0.400495) | 1.710353 / 1.541195 (0.169158) | 1.798980 / 1.468490 
(0.330490) | 0.499307 / 4.584777 (-4.085470) | 3.026734 / 3.745712 (-0.718978) | 2.923940 / 5.269862 (-2.345922) | 1.831870 / 4.565676 (-2.733807) | 0.058551 / 0.424275 (-0.365724) | 0.006403 / 0.007607 (-0.001204) | 0.464164 / 0.226044 (0.238119) | 4.644556 / 2.268929 (2.375628) | 2.341455 / 55.444624 (-53.103169) | 2.004385 / 6.876477 (-4.872092) | 2.051819 / 2.142072 (-0.090253) | 0.585610 / 4.805227 (-4.219617) | 0.124735 / 6.500664 (-6.375929) | 0.061150 / 0.075469 (-0.014319) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224665 / 1.841788 (-0.617122) | 17.476227 / 8.074308 (9.401919) | 13.867617 / 10.191392 (3.676225) | 0.144177 / 0.680424 (-0.536247) | 0.017045 / 0.534201 (-0.517156) | 0.337468 / 0.579283 (-0.241815) | 0.374476 / 0.434364 (-0.059888) | 0.393428 / 0.540337 (-0.146910) | 0.535335 / 1.386936 (-0.851601) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006208 / 0.011353 (-0.005145) | 0.003650 / 0.011008 (-0.007359) | 0.062843 / 0.038508 (0.024335) | 0.062272 / 0.023109 (0.039162) | 0.446336 / 0.275898 (0.170438) | 0.477476 / 0.323480 (0.153996) | 0.004862 / 0.007986 (-0.003124) | 0.002822 / 0.004328 (-0.001506) | 0.063427 / 0.004250 (0.059177) | 0.049023 / 0.037052 (0.011971) | 0.453633 / 0.258489 (0.195144) | 0.486494 / 0.293841 (0.192653) | 0.028634 / 0.128546 (-0.099912) | 0.008187 / 0.075646 (-0.067460) | 0.068846 / 0.419271 (-0.350425) | 0.041104 / 0.043533 (-0.002429) | 0.446646 / 0.255139 (0.191507) | 0.468860 / 0.283200 (0.185660) | 0.020980 / 0.141683 (-0.120703) | 1.455565 / 1.452155 (0.003410) | 1.511142 / 1.492716 (0.018426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224242 / 0.018006 (0.206236) | 0.408483 / 0.000490 (0.407993) | 0.003495 / 0.000200 (0.003296) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027286 / 0.037411 (-0.010125) | 0.081151 / 0.014526 (0.066625) | 0.096598 / 0.176557 (-0.079959) | 0.146193 / 0.737135 (-0.590942) | 0.092213 / 0.296338 (-0.204125) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463837 / 0.215209 (0.248628) | 4.636820 / 2.077655 (2.559165) | 2.576100 / 1.504120 (1.071980) | 2.396974 / 1.541195 (0.855779) | 2.461526 / 1.468490 (0.993036) | 0.502360 / 4.584777 (-4.082417) | 3.099973 / 3.745712 (-0.645739) | 2.937260 / 5.269862 (-2.332602) | 1.871274 / 4.565676 (-2.694402) | 0.057913 / 0.424275 (-0.366362) | 0.006511 / 0.007607 (-0.001096) | 0.536917 / 0.226044 (0.310873) | 5.396966 / 2.268929 (3.128038) | 3.015646 / 55.444624 (-52.428978) | 2.673793 / 6.876477 (-4.202684) | 2.712376 / 2.142072 (0.570304) | 0.591632 / 4.805227 (-4.213595) | 0.124872 / 6.500664 (-6.375792) | 0.061820 / 0.075469 (-0.013649) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356828 / 1.841788 (-0.484960) | 18.076995 / 8.074308 (10.002687) | 15.116482 / 10.191392 (4.925090) | 0.151375 / 0.680424 (-0.529049) | 0.017867 / 0.534201 (-0.516334) | 0.335012 / 0.579283 (-0.244271) | 0.384137 / 0.434364 (-0.050226) | 0.397792 / 0.540337 (-0.142546) | 0.551521 / 1.386936 (-0.835415) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009418 / 0.011353 (-0.001935) | 0.005186 / 0.011008 (-0.005822) | 0.112270 / 0.038508 (0.073761) | 0.114856 / 0.023109 (0.091747) | 0.402267 / 0.275898 (0.126369) | 0.445213 / 0.323480 (0.121733) | 0.005588 / 0.007986 (-0.002398) | 0.004315 / 0.004328 (-0.000013) | 0.083561 / 0.004250 (0.079311) | 0.087319 / 0.037052 (0.050267) | 0.400989 / 0.258489 (0.142500) | 0.455636 / 0.293841 (0.161795) | 0.045168 / 0.128546 (-0.083378) | 0.010939 / 0.075646 (-0.064707) | 0.400120 / 0.419271 (-0.019151) | 0.071599 / 0.043533 (0.028066) | 0.418112 / 0.255139 (0.162973) | 0.443889 / 0.283200 (0.160690) | 0.032433 / 0.141683 (-0.109250) | 1.886313 / 1.452155 (0.434159) | 2.012909 / 1.492716 (0.520193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306991 / 0.018006 (0.288985) | 0.590426 / 0.000490 (0.589937) | 0.011811 / 0.000200 (0.011611) | 0.000596 / 0.000054 (0.000542) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.042520 / 0.037411 (0.005108) | 0.129808 / 0.014526 (0.115283) | 0.125481 / 0.176557 (-0.051075) | 0.199181 / 0.737135 (-0.537954) | 0.130426 / 0.296338 (-0.165913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.526455 / 0.215209 (0.311246) | 5.213304 / 2.077655 (3.135649) | 2.643406 / 1.504120 (1.139286) | 2.611214 / 1.541195 (1.070019) | 2.586730 / 1.468490 
(1.118240) | 0.639103 / 4.584777 (-3.945674) | 5.197421 / 3.745712 (1.451709) | 4.634642 / 5.269862 (-0.635220) | 2.741079 / 4.565676 (-1.824598) | 0.073064 / 0.424275 (-0.351211) | 0.009441 / 0.007607 (0.001834) | 0.635984 / 0.226044 (0.409940) | 6.283268 / 2.268929 (4.014339) | 3.337205 / 55.444624 (-52.107419) | 3.192362 / 6.876477 (-3.684114) | 2.910367 / 2.142072 (0.768294) | 0.767937 / 4.805227 (-4.037290) | 0.177467 / 6.500664 (-6.323198) | 0.081162 / 0.075469 (0.005693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.803717 / 1.841788 (-0.038071) | 26.823235 / 8.074308 (18.748927) | 19.714471 / 10.191392 (9.523079) | 0.204048 / 0.680424 (-0.476376) | 0.025992 / 0.534201 (-0.508209) | 0.521438 / 0.579283 (-0.057845) | 0.596524 / 0.434364 (0.162160) | 0.600763 / 0.540337 (0.060425) | 0.945971 / 1.386936 (-0.440965) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009126 / 0.011353 (-0.002226) | 0.005109 / 0.011008 (-0.005899) | 0.083046 / 0.038508 (0.044538) | 0.115930 / 0.023109 (0.092821) | 0.534311 / 0.275898 (0.258413) | 0.552846 / 0.323480 (0.229366) | 0.007240 / 0.007986 (-0.000746) | 0.004617 / 0.004328 (0.000289) | 0.083927 / 0.004250 (0.079676) | 0.075926 / 0.037052 (0.038873) | 0.534750 / 0.258489 (0.276261) | 0.575122 / 0.293841 (0.281281) | 0.041001 / 0.128546 (-0.087545) | 0.010851 / 0.075646 (-0.064795) | 0.096574 / 0.419271 (-0.322697) | 0.063533 / 0.043533 (0.020001) | 0.546850 / 0.255139 (0.291711) | 0.547122 / 0.283200 (0.263922) | 0.032437 / 0.141683 (-0.109245) | 1.926191 / 1.452155 (0.474036) | 2.029841 / 1.492716 (0.537125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275582 / 0.018006 (0.257576) | 0.574212 / 0.000490 (0.573722) | 0.006863 / 0.000200 (0.006663) | 0.000236 / 0.000054 (0.000181) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.045340 / 0.037411 (0.007928) | 0.129196 / 0.014526 (0.114670) | 0.136637 / 0.176557 (-0.039920) | 0.200040 / 0.737135 (-0.537096) | 0.136328 / 0.296338 (-0.160011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612379 / 0.215209 (0.397170) | 5.874664 / 2.077655 (3.797010) | 3.070626 / 1.504120 (1.566506) | 2.999319 / 1.541195 (1.458124) | 3.000571 / 1.468490 (1.532081) | 0.732119 / 4.584777 (-3.852658) | 5.193226 / 3.745712 (1.447514) | 4.714571 / 5.269862 (-0.555291) | 2.870438 / 4.565676 (-1.695239) | 0.075793 / 0.424275 (-0.348482) | 0.009238 / 0.007607 (0.001631) | 0.695192 / 0.226044 (0.469148) | 6.897996 / 2.268929 (4.629067) | 3.923474 / 55.444624 (-51.521150) | 3.458326 / 6.876477 (-3.418151) | 3.331652 / 2.142072 (1.189579) | 0.821132 / 4.805227 (-3.984095) | 0.182252 / 6.500664 (-6.318412) | 0.084730 / 0.075469 (0.009260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.919861 / 1.841788 (0.078073) | 27.437228 / 8.074308 (19.362920) | 21.109899 / 10.191392 (10.918507) | 0.245998 / 0.680424 (-0.434426) | 0.025817 / 0.534201 (-0.508384) | 0.517757 / 0.579283 (-0.061526) | 0.576375 / 0.434364 (0.142011) | 0.625283 / 0.540337 (0.084945) | 0.956877 / 1.386936 (-0.430059) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008099 / 0.011353 (-0.003254) | 0.004815 / 0.011008 (-0.006194) | 0.099657 / 0.038508 (0.061149) | 0.064737 / 0.023109 (0.041628) | 0.461773 / 0.275898 (0.185875) | 0.444810 / 0.323480 (0.121330) | 0.004247 / 0.007986 (-0.003739) | 0.004956 / 0.004328 (0.000628) | 0.068664 / 0.004250 (0.064414) | 0.052039 / 0.037052 (0.014986) | 0.406750 / 0.258489 (0.148261) | 0.452832 / 0.293841 (0.158991) | 0.044518 / 0.128546 (-0.084028) | 0.013220 / 0.075646 (-0.062426) | 0.317713 / 0.419271 (-0.101558) | 0.061897 / 0.043533 (0.018364) | 0.398664 / 0.255139 (0.143525) | 0.531494 / 0.283200 (0.248294) | 0.064033 / 0.141683 (-0.077650) | 1.590385 / 1.452155 (0.138231) | 1.769918 / 1.492716 (0.277202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230795 / 0.018006 (0.212789) | 0.568797 / 0.000490 (0.568308) | 0.013498 / 0.000200 (0.013298) | 0.000448 / 0.000054 (0.000393) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028394 / 0.037411 (-0.009017) | 0.081973 / 0.014526 (0.067447) | 0.097623 / 0.176557 (-0.078934) | 0.158691 / 0.737135 (-0.578445) | 0.101548 / 0.296338 (-0.194791) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.574459 / 0.215209 (0.359249) | 5.709871 / 2.077655 (3.632217) | 2.521460 / 1.504120 (1.017340) | 2.239463 / 1.541195 (0.698268) | 2.195067 / 1.468490 
(0.726577) | 0.792390 / 4.584777 (-3.792387) | 4.841665 / 3.745712 (1.095952) | 4.201620 / 5.269862 (-1.068241) | 2.664081 / 4.565676 (-1.901595) | 0.097661 / 0.424275 (-0.326614) | 0.008428 / 0.007607 (0.000821) | 0.698729 / 0.226044 (0.472684) | 6.908867 / 2.268929 (4.639939) | 3.247480 / 55.444624 (-52.197145) | 2.563921 / 6.876477 (-4.312556) | 2.738249 / 2.142072 (0.596177) | 0.972066 / 4.805227 (-3.833161) | 0.191196 / 6.500664 (-6.309468) | 0.064732 / 0.075469 (-0.010737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.421910 / 1.841788 (-0.419877) | 20.633538 / 8.074308 (12.559230) | 18.054562 / 10.191392 (7.863170) | 0.194125 / 0.680424 (-0.486299) | 0.028097 / 0.534201 (-0.506104) | 0.417857 / 0.579283 (-0.161426) | 0.518758 / 0.434364 (0.084394) | 0.500199 / 0.540337 (-0.040138) | 0.754662 / 1.386936 (-0.632274) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008452 / 0.011353 (-0.002901) | 0.004646 / 0.011008 (-0.006362) | 0.077286 / 0.038508 (0.038778) | 0.072507 / 0.023109 (0.049398) | 0.439580 / 0.275898 (0.163682) | 0.506166 / 0.323480 (0.182686) | 0.006035 / 0.007986 (-0.001950) | 0.003886 / 0.004328 (-0.000442) | 0.075091 / 0.004250 (0.070841) | 0.063163 / 0.037052 (0.026110) | 0.468550 / 0.258489 (0.210061) | 0.523273 / 0.293841 (0.229432) | 0.048728 / 0.128546 (-0.079818) | 0.012991 / 0.075646 (-0.062655) | 0.087964 / 0.419271 (-0.331308) | 0.058920 / 0.043533 (0.015387) | 0.451247 / 0.255139 (0.196108) | 0.489827 / 0.283200 (0.206628) | 0.031164 / 0.141683 (-0.110519) | 1.675504 / 1.452155 (0.223349) | 1.806098 / 1.492716 (0.313382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253567 / 0.018006 (0.235561) | 0.508971 / 0.000490 (0.508481) | 0.010882 / 0.000200 (0.010682) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029490 / 0.037411 (-0.007921) | 0.090255 / 0.014526 (0.075729) | 0.110075 / 0.176557 (-0.066482) | 0.159375 / 0.737135 (-0.577760) | 0.109313 / 0.296338 (-0.187025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580252 / 0.215209 (0.365043) | 5.911741 / 2.077655 (3.834086) | 2.659405 / 1.504120 (1.155285) | 2.344943 / 1.541195 (0.803749) | 2.390748 / 1.468490 (0.922258) | 0.827823 / 4.584777 (-3.756954) | 4.973544 / 3.745712 (1.227832) | 4.300220 / 5.269862 (-0.969642) | 2.826181 / 4.565676 (-1.739495) | 0.101013 / 0.424275 (-0.323263) | 0.008025 / 0.007607 (0.000418) | 0.728414 / 0.226044 (0.502369) | 7.508045 / 2.268929 (5.239117) | 3.687627 / 55.444624 (-51.756997) | 2.902953 / 6.876477 (-3.973524) | 3.094624 / 2.142072 (0.952551) | 1.054696 / 4.805227 (-3.750531) | 0.212297 / 6.500664 (-6.288367) | 0.070211 / 0.075469 (-0.005258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567117 / 1.841788 (-0.274670) | 21.420746 / 8.074308 (13.346438) | 19.857467 / 10.191392 (9.666075) | 0.228554 / 0.680424 (-0.451870) | 0.032278 / 0.534201 (-0.501923) | 0.459966 / 0.579283 (-0.119317) | 0.541219 / 0.434364 (0.106855) | 0.549599 / 0.540337 (0.009261) | 0.731476 / 1.386936 (-0.655460) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-27T13:56:34Z
| 2023-09-28T18:34:02Z
| 2023-09-28T18:23:35Z
|
COLLABORATOR
| null | null | null |
... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS)
Fix https://github.com/huggingface/datasets/issues/6260
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6265/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6265/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6265.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6265",
"merged_at": "2023-09-28T18:23:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6265.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6265"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4652
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4652/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4652/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4652/events
|
https://github.com/huggingface/datasets/issues/4652
| 1,296,697,498
|
I_kwDODunzps5NSgia
| 4,652
|
Add Sentence Compression Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/sentence-compression)."
] | 2022-07-07T02:13:46Z
| 2022-07-14T02:11:48Z
| 2022-07-14T02:11:48Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Adding a Dataset
- **Name:** *Sentence Compression*
- **Description:** *Large corpus of uncompressed and compressed sentences from news articles.*
- **Paper:** *https://www.aclweb.org/anthology/D13-1155/*
- **Data:** *https://github.com/google-research-datasets/sentence-compression/tree/master/data*
- **Motivation:** *Dataset for training and evaluating sentence compression models.*
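
Per the comment above, the data was later uploaded to the Hub; a minimal loading sketch (the repo id comes from that comment, and the `train` split name is an assumption):

```python
from datasets import load_dataset

# Repo id taken from the comment above; split name assumed to be "train"
ds = load_dataset("embedding-data/sentence-compression")
print(ds["train"][0])
```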
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4652/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4652/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4907
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4907/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4907/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4907/events
|
https://github.com/huggingface/datasets/issues/4907
| 1,353,808,348
|
I_kwDODunzps5QsXnc
| 4,907
|
None Type error for swda datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4",
"events_url": "https://api.github.com/users/hannan72/events{/privacy}",
"followers_url": "https://api.github.com/users/hannan72/followers",
"following_url": "https://api.github.com/users/hannan72/following{/other_user}",
"gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hannan72",
"id": 8229163,
"login": "hannan72",
"node_id": "MDQ6VXNlcjgyMjkxNjM=",
"organizations_url": "https://api.github.com/users/hannan72/orgs",
"received_events_url": "https://api.github.com/users/hannan72/received_events",
"repos_url": "https://api.github.com/users/hannan72/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannan72/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hannan72",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @hannan72 ! I couldn't reproduce the error on my side, can you share the full stack trace please ?",
"Thanks a lot for your response @lhoestq \r\nThe problem is solved accidentally today and I don't know exactly why it was happened yesterday.\r\nThe issue can be closed.",
"Ok, let us know if you encounter the issue again ;)"
] | 2022-08-29T07:05:20Z
| 2022-08-30T14:43:41Z
| 2022-08-30T14:43:41Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
I got a `'NoneType' object is not callable` error while loading the swda dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("swda")
```
## Expected results
Run without error
## Environment info
- `datasets` version: 2.4.0
- Python version: 3.8.10
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4907/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4907/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6252
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6252/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6252/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6252/events
|
https://github.com/huggingface/datasets/issues/6252
| 1,906,375,378
|
I_kwDODunzps5xoPrS
| 6,252
|
exif_transpose not done to Image (PIL problem)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/108274349?v=4",
"events_url": "https://api.github.com/users/rhajou/events{/privacy}",
"followers_url": "https://api.github.com/users/rhajou/followers",
"following_url": "https://api.github.com/users/rhajou/following{/other_user}",
"gists_url": "https://api.github.com/users/rhajou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rhajou",
"id": 108274349,
"login": "rhajou",
"node_id": "U_kgDOBnQirQ",
"organizations_url": "https://api.github.com/users/rhajou/orgs",
"received_events_url": "https://api.github.com/users/rhajou/received_events",
"repos_url": "https://api.github.com/users/rhajou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rhajou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rhajou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rhajou",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
{
"closed_at": null,
"closed_issues": 5,
"created_at": "2023-02-13T16:22:42Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
},
"description": "Next major release",
"due_on": null,
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"id": 9038583,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"open_issues": 3,
"state": "open",
"title": "3.0",
"updated_at": "2024-08-21T09:35:06Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10"
}
|
[
"Indeed, it makes sense to do this by default. \r\n\r\nIn the meantime, you can use `.with_transform` to transpose the images when accessing them:\r\n\r\n```python\r\nimport PIL.ImageOps\r\n\r\ndef exif_transpose_transform(batch):\r\n batch[\"image\"] = [PIL.ImageOps.exif_transpose(image) for image in batch[\"image\"]]\r\n return batch\r\n\r\ndataset = dataset.with_transform(exif_transpose_transform)\r\n```",
"This operation sets some `Image` attributes to `None` (`.format`, `.filename`, etc.), causing our tests to fail, so I think we should wait for Datasets 3.0 to make this change. In version 3.0, storing image paths will be replaced by embedding image bytes, so there will be fewer instances where we use the `.filename` attribute."
] | 2023-09-21T08:11:46Z
| 2024-03-19T15:29:43Z
| 2024-03-19T15:29:43Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
I noticed that some of my images loaded using PIL carry EXIF metadata that can rotate them when loading.
Since `datasets.features.Image` uses PIL for loading, the loaded image may be rotated (width and height will be swapped), so for tasks such as object detection and LayoutLM this can create inconsistencies (between input bboxes and input images).
For now there is no option in `datasets.features.Image` to control that. We need to do the following when preparing examples (for training, test, or inference):
```python
from PIL import Image, ImageOps

pil = Image.open(image_path)  # image_path is a placeholder
pil = ImageOps.exif_transpose(pil)  # apply the EXIF orientation tag to the pixel data
```
reference: https://stackoverflow.com/a/63950647/5720150
Is it possible to add this by default to `datasets.features.Image`? Or to add an option to apply `ImageOps.exif_transpose`?
Thank you
### Motivation
Prevent having inverted data related to exif metadata that may affect object detection tasks
### Your contribution
I can help with changing `datasets.features.Image`.
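
Until such an option exists, a minimal sketch (assuming a dataset with an "image" column; the "beans" dataset here is just a placeholder) of applying the transpose eagerly with `map`, complementing the on-the-fly `with_transform` approach suggested in the comments:

```python
import PIL.ImageOps
from datasets import load_dataset

ds = load_dataset("beans", split="train")  # placeholder dataset with an "image" column

def fix_orientation(example):
    # exif_transpose applies the EXIF orientation tag and removes it from the copy
    example["image"] = PIL.ImageOps.exif_transpose(example["image"])
    return example

ds = ds.map(fix_orientation)  # writes the corrected images into the cache
```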
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6252/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6252/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5112
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5112/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5112/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5112/events
|
https://github.com/huggingface/datasets/issues/5112
| 1,409,143,409
|
I_kwDODunzps5T_dJx
| 5,112
|
Bug with filtered indices
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"The issue is here:\r\nhttps://github.com/huggingface/datasets/blob/3ad9644b9a2e4558dd1d0f1e43c67658674e6228/src/datasets/arrow_dataset.py#L2964",
"@PartiallyTyped, @Muennighoff: the issue is fixed.\r\n\r\nWe are planning to make a patch release today.",
"Thanks a lot for the swift response! For a brief moment yesterday I thought I had gone insane 🤣On 14 Oct 2022, at 15:44, Albert Villanova del Moral ***@***.***> wrote:\n@PartiallyTyped, @Muennighoff: the issue is fixed.\nWe are planning to make a patch release today.\n\n—Reply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you were mentioned.Message ID: ***@***.***>"
] | 2022-10-14T10:35:47Z
| 2022-10-14T13:55:03Z
| 2022-10-14T12:11:45Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
As reported by @PartiallyTyped (and by @Muennighoff):
- https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524
There is an issue with the indices of a filtered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset

ds = Dataset.from_dict({"num": [0, 1, 2, 3]})
ds = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2)
assert all(item["num"] % 2 == 0 for item in ds)
```
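For debugging, one way to make the mismatch visible is to materialize the indices mapping that `filter` keeps on top of the original table (a minimal sketch; `flatten_indices` rewrites the table so no mapping remains):

```python
flat = ds.flatten_indices()
print(flat["num"])  # on the affected release, rows not matching the predicate show up here
```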
## Expected results
The assertion passes: the filtered dataset contains only the examples whose `num` is even.
## Actual results
The assertion fails: the indices of the filtered dataset point at the wrong rows, so examples that do not match the predicate are included.
## Preliminary investigation
It seems to be a bug introduced by:
- #5030
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5112/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5112/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5325
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5325/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5325/events
|
https://github.com/huggingface/datasets/issues/5325
| 1,471,536,822
|
I_kwDODunzps5Xtd62
| 5,325
|
map(...batch_size=None) for IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frankier",
"id": 299380,
"login": "frankier",
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"repos_url": "https://api.github.com/users/frankier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frankier",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
] | null |
[
"Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix.",
"@mariosasko as this is something simple maybe I can include it as part of https://github.com/huggingface/datasets/pull/5311? Let me know :+1:",
"#self-assign",
"Feel free to close this @lhoestq as part of https://github.com/huggingface/datasets/pull/5336 :hugs:",
"Thanks again :)\r\n\r\n> For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.\r\n\r\nThis is interesting as well, if anyone wants to explore"
] | 2022-12-01T15:43:42Z
| 2022-12-07T15:54:43Z
| 2022-12-07T15:54:42Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
`Dataset.map(...)` allows `batch_size` to be `None`. It would be nice if `IterableDataset.map` did too.
### Motivation
Although it may seem a spurious request, given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice.
One is that `load_dataset(...)` can return either an `IterableDataset` or a `Dataset`. mypy will then complain about `batch_size=None` even if we know the result is a `Dataset`. Of course we can do:
assert isinstance(d, datasets.Dataset)
But it is a mild inconvenience. What's more annoying is that whenever we use something like `combine_datasets(...)`, we end up with the union again, and so have to repeat the assert.
Another is that we could actually end up with an `IterableDataset` small enough for memory in normal/correct usage, e.g. by filtering a massive `IterableDataset`.
For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this; a sketch of one approach follows.
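
A minimal sketch of one such conversion (assuming the filtered stream fits in memory and a recent release that provides `Dataset.from_generator`; the dataset and column names are placeholders):

```python
from datasets import Dataset, load_dataset

iterable_ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
small_stream = iterable_ds.filter(lambda x: len(x["text"]) < 100)

# Materialize the (now small) stream into a map-style Dataset
ds = Dataset.from_generator(lambda: (example for example in small_stream))
ds = ds.map(lambda batch: batch, batched=True, batch_size=None)  # whole dataset as one batch
```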
### Your contribution
Not this time.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5325/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5749
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5749/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5749/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5749/events
|
https://github.com/huggingface/datasets/issues/5749
| 1,668,016,321
|
I_kwDODunzps5ja-jB
| 5,749
|
AttributeError: 'Version' object has no attribute 'match'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54584290?v=4",
"events_url": "https://api.github.com/users/gulnaz-zh/events{/privacy}",
"followers_url": "https://api.github.com/users/gulnaz-zh/followers",
"following_url": "https://api.github.com/users/gulnaz-zh/following{/other_user}",
"gists_url": "https://api.github.com/users/gulnaz-zh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gulnaz-zh",
"id": 54584290,
"login": "gulnaz-zh",
"node_id": "MDQ6VXNlcjU0NTg0Mjkw",
"organizations_url": "https://api.github.com/users/gulnaz-zh/orgs",
"received_events_url": "https://api.github.com/users/gulnaz-zh/received_events",
"repos_url": "https://api.github.com/users/gulnaz-zh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gulnaz-zh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gulnaz-zh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gulnaz-zh",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"I got the same error, and the official website for visual genome is down. Did you solve this problem? ",
"I am in the same situation now :( ",
"Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.",
"The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.",
"Apart form data host server being down, there is an additional issue with the `datasets` library introduced by this PR:\r\n- #5238\r\n\r\nI am working to fix it.",
"PR that fixes the AttributeError: https://huggingface.co/datasets/visual_genome/discussions/2",
"For the issue with their data host server being down, I have opened a discussion in the \"Community\" tab of the Hub dataset: https://huggingface.co/datasets/visual_genome/discussions/3\r\nLet's continue the discussion there.",
"The authors just replied to us with their new URL: https://homes.cs.washington.edu/~ranjay/visualgenome/\r\n\r\nWe have fixed the datasets loading script, which is operative again."
] | 2023-04-14T10:48:06Z
| 2023-06-30T11:31:17Z
| 2023-04-18T12:57:08Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When I run
`from datasets import load_dataset`
`data = load_dataset("visual_genome", 'region_descriptions_v1.2.0')`
I get:
`AttributeError: 'Version' object has no attribute 'match'`
### Steps to reproduce the bug
`from datasets import load_dataset`
`data = load_dataset("visual_genome", 'region_descriptions_v1.2.0')`
### Expected behavior
The dataset loads without error. Instead, this is the error trace:
Downloading and preparing dataset visual_genome/region_descriptions_v1.2.0 to C:/Users/Acer/.cache/huggingface/datasets/visual_genome/region_descriptions_v1.2.0/1.2.0/136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3...
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 data = load_dataset("visual_genome", 'region_descriptions_v1.2.0')
File ~\.conda\envs\aai\Lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1790 # Download and prepare data
-> 1791 builder_instance.download_and_prepare(
1792 download_config=download_config,
1793 download_mode=download_mode,
1794 verification_mode=verification_mode,
1795 try_from_hf_gcs=try_from_hf_gcs,
1796 num_proc=num_proc,
1797 storage_options=storage_options,
1798 )
1800 # Build dataset for splits
1801 keep_in_memory = (
1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1803 )
File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
889 if num_proc is not None:
890 prepare_split_kwargs["num_proc"] = num_proc
--> 891 self._download_and_prepare(
892 dl_manager=dl_manager,
893 verification_mode=verification_mode,
894 **prepare_split_kwargs,
895 **download_and_prepare_kwargs,
896 )
897 # Sync info
898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:1651, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1651 super()._download_and_prepare(
1652 dl_manager,
1653 verification_mode,
1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1655 or verification_mode == VerificationMode.ALL_CHECKS,
1656 **prepare_splits_kwargs,
1657 )
File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:964, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
962 split_dict = SplitDict(dataset_name=self.name)
963 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 964 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
966 # Checksums verification
967 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:377, in VisualGenome._split_generators(self, dl_manager)
375 def _split_generators(self, dl_manager):
376 # Download image meta datas.
--> 377 image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url)
378 image_metadatas_file = os.path.join(
379 image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url)
380 )
382 # Download annotations
File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:328, in VisualGenomeConfig.image_metadata_url(self)
326 @property
327 def image_metadata_url(self):
--> 328 if not self.version.match(_LATEST_VERSIONS["image_metadata"]):
329 logger.warning(
330 f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double check that image data are unchanged between the two versions."
331 )
332 return f"{_BASE_ANNOTATION_URL}/image_data.json.zip"
### Environment info
datasets 2.11.0
python 3.11.3
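
For context, a sketch of the kind of guard that sidesteps the missing `.match` call (purely illustrative; the version strings here are hypothetical, and the real fix landed in the dataset script on the Hub, see the discussion linked in the comments):

```python
from packaging import version

latest = "1.2.0"   # hypothetical _LATEST_VERSIONS entry
current = "1.2.0"  # hypothetical config version
if version.parse(current) != version.parse(latest):
    print(f"Latest image metadata version is {latest}; generating version {current} instead.")
```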
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5749/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5749/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7293
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7293/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7293/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7293/events
|
https://github.com/huggingface/datasets/pull/7293
| 2,664,592,054
|
PR_kwDODunzps6CIjS-
| 7,293
|
Updated inconsistent output in documentation examples for `ClassLabel`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiopaniego",
"id": 17179696,
"login": "sergiopaniego",
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiopaniego",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Updated! 😄 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7293). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq, can you help with this failing test please? 🙏 "
] | 2024-11-16T16:20:57Z
| 2024-12-06T11:33:33Z
| 2024-12-06T11:32:01Z
|
MEMBER
| null | null | null |
fix #7129
@stevhliu
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7293/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7293/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7293.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7293",
"merged_at": "2024-12-06T11:32:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7293.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7293"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4719
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4719/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4719/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4719/events
|
https://github.com/huggingface/datasets/issues/4719
| 1,309,854,492
|
I_kwDODunzps5OEssc
| 4,719
|
Issue loading TheNoob3131/mosquito-data dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53668030?v=4",
"events_url": "https://api.github.com/users/thenerd31/events{/privacy}",
"followers_url": "https://api.github.com/users/thenerd31/followers",
"following_url": "https://api.github.com/users/thenerd31/following{/other_user}",
"gists_url": "https://api.github.com/users/thenerd31/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thenerd31",
"id": 53668030,
"login": "thenerd31",
"node_id": "MDQ6VXNlcjUzNjY4MDMw",
"organizations_url": "https://api.github.com/users/thenerd31/orgs",
"received_events_url": "https://api.github.com/users/thenerd31/received_events",
"repos_url": "https://api.github.com/users/thenerd31/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thenerd31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thenerd31/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thenerd31",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I am also getting a ValueError: 'Couldn't cast' at the bottom. Is this because of some delimiter issue? My dataset is on the Huggingface Hub. If you could look at it, that would be greatly appreciated.",
"Hi @thenerd31, thanks for reporting.\r\n\r\nPlease note that your issue is not caused by the Hugging Face Datasets library, but it has to do with the specific implementation of your dataset on the Hub.\r\n\r\nTherefore, I'm transferring this discussion to your own dataset Community tab: https://huggingface.co/datasets/TheNoob3131/mosquito-data/discussions/1"
] | 2022-07-19T17:47:37Z
| 2022-07-20T06:46:57Z
| 2022-07-20T06:46:02Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|

My dataset is public on the Hugging Face Hub, but when I try to load it with the `load_dataset` command, it shows that it is downloading the files and then throws a ValueError. When I checked my directory to see whether the files were downloaded, the folder was blank.
Here is the error:
ValueError Traceback (most recent call last)
Input In [8], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("TheNoob3131/mosquito-data", split="train")
File ~\Anaconda3\lib\site-packages\datasets\load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1678 # Download and prepare data
-> 1679 builder_instance.download_and_prepare(
1680 download_config=download_config,
1681 download_mode=download_mode,
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
1684 use_auth_token=use_auth_token,
1685 )
1687 # Build dataset for splits
1688 keep_in_memory = (
1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1690 )
Is the dataset in the wrong format or is there some security permission that I should enable?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4719/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4719/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5671/events
|
https://github.com/huggingface/datasets/issues/5671
| 1,640,840,012
|
I_kwDODunzps5hzTtM
| 5,671
|
How to use `load_dataset('glue', 'cola')`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4",
"events_url": "https://api.github.com/users/makinzm/events{/privacy}",
"followers_url": "https://api.github.com/users/makinzm/followers",
"following_url": "https://api.github.com/users/makinzm/following{/other_user}",
"gists_url": "https://api.github.com/users/makinzm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/makinzm",
"id": 40193664,
"login": "makinzm",
"node_id": "MDQ6VXNlcjQwMTkzNjY0",
"organizations_url": "https://api.github.com/users/makinzm/orgs",
"received_events_url": "https://api.github.com/users/makinzm/received_events",
"repos_url": "https://api.github.com/users/makinzm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/makinzm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makinzm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/makinzm",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Sounds like an issue with incompatible `transformers` dependencies versions.\r\n\r\nCan you try to update `transformers` ?\r\n\r\nEDIT: I checked the `transformers` dependencies and it seems like you need `tokenizers>=0.10.1,<0.11` with `transformers==4.5.1`\r\n\r\nEDIT2: this old version of `datasets` seems to import `transformers` but it's no longer the case, so you could also simply update `datasets` and `transformers` won't be imported",
"Thank you for advising me to update these libraries versions.\r\n\r\nI can implement codes using `datasets==2.10.1` and `transformers==4.27.3`"
] | 2023-03-26T09:40:34Z
| 2023-03-28T07:43:44Z
| 2023-03-28T07:43:43Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm new to HuggingFace datasets, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
---------------------------------------------------------------------------
InvalidVersion Traceback (most recent call last)
File <timed exec>:1
(Omit because of long error message)
File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version)
195 match = self._regex.search(version)
196 if not match:
--> 197 raise InvalidVersion(f"Invalid version: '{version}'")
199 # Store the parsed out pieces of the version
200 self._version = _Version(
201 epoch=int(match.group("epoch")) if match.group("epoch") else 0,
202 release=tuple(int(i) for i in match.group("release").split(".")),
(...)
208 local=_parse_local_version(match.group("local")),
209 )
InvalidVersion: Invalid version: '0.10.1,<0.11'
```
- You can check this full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb)
### Steps to reproduce the bug
- This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup)
1. cd into `/DockerImage` and run `docker build . -t week0`
2. cd into `/` and run `docker-compose up`
3. Run `experimental_notebooks/data_exploration.ipynb`
----
Just to be sure, here are the Dockerfile and requirements.txt:
- Dockerfile
```Dockerfile
FROM python:3.8
WORKDIR /root/working
RUN apt-get update && \
apt-get install -y python3-dev python3-pip python3-venv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt
CMD ["bash"]
```
- requirements.txt
```txt
pytorch-lightning==1.2.10
datasets==1.6.2
transformers==4.5.1
scikit-learn==0.24.2
```
### Expected behavior
`load_dataset('glue', 'cola')` should run without raising an error.
### Environment info
Already provided above (see the Dockerfile and requirements.txt).
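A minimal sketch of the suggested fix, assuming (per the comments above) that upgrading both libraries avoids the `InvalidVersion` parse error:

```python
# Assumed fix, per the maintainers' comments: upgrade both libraries first, e.g.
#   pip install -U datasets transformers
from datasets import load_dataset

cola_dataset = load_dataset("glue", "cola")
print(cola_dataset["train"][0])
```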
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4",
"events_url": "https://api.github.com/users/makinzm/events{/privacy}",
"followers_url": "https://api.github.com/users/makinzm/followers",
"following_url": "https://api.github.com/users/makinzm/following{/other_user}",
"gists_url": "https://api.github.com/users/makinzm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/makinzm",
"id": 40193664,
"login": "makinzm",
"node_id": "MDQ6VXNlcjQwMTkzNjY0",
"organizations_url": "https://api.github.com/users/makinzm/orgs",
"received_events_url": "https://api.github.com/users/makinzm/received_events",
"repos_url": "https://api.github.com/users/makinzm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/makinzm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makinzm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/makinzm",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5671/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5671/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5211
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5211/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5211/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5211/events
|
https://github.com/huggingface/datasets/pull/5211
| 1,438,544,617
|
PR_kwDODunzps5CVgBx
| 5,211
|
Update Overview.ipynb google colab
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"WDYT @albertvillanova ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5211). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-07T15:23:52Z
| 2022-11-29T15:59:48Z
| 2022-11-29T15:54:17Z
|
MEMBER
| null | null | null |
- removed metrics stuff
- added image example
- added audio example (with ffmpeg instructions)
- updated the "add a new dataset" section
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5211/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5211/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5211.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5211",
"merged_at": "2022-11-29T15:54:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5211.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5211"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4958
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4958/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4958/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4958/events
|
https://github.com/huggingface/datasets/issues/4958
| 1,367,695,376
|
I_kwDODunzps5RhWAQ
| 4,958
|
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4",
"events_url": "https://api.github.com/users/hasakikiki/events{/privacy}",
"followers_url": "https://api.github.com/users/hasakikiki/followers",
"following_url": "https://api.github.com/users/hasakikiki/following{/other_user}",
"gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hasakikiki",
"id": 66322047,
"login": "hasakikiki",
"node_id": "MDQ6VXNlcjY2MzIyMDQ3",
"organizations_url": "https://api.github.com/users/hasakikiki/orgs",
"received_events_url": "https://api.github.com/users/hasakikiki/received_events",
"repos_url": "https://api.github.com/users/hasakikiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hasakikiki",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"I have solved this problem... The extension of the file should be `.json` not `.jsonl`"
] | 2022-09-09T11:29:55Z
| 2022-09-09T11:38:44Z
| 2022-09-09T11:38:44Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Hi,
When I use `load_dataset` with local jsonl files, the error below occurs, and typing the link into a browser returns `404: Not Found`. I downloaded the other `.py` files using the same method and it works. It seems that the server is missing the appropriate file, or it is a problem with the code version.
```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```
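A hedged alternative to renaming the files: the generic `json` builder reads JSON Lines directly, so you can sidestep resolving a `jsonl` script entirely. The file path below is a placeholder:

```python
from datasets import load_dataset

# The "json" builder accepts JSON Lines input; "my_data.jsonl" is a placeholder path.
dataset = load_dataset("json", data_files="my_data.jsonl")
```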
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4",
"events_url": "https://api.github.com/users/hasakikiki/events{/privacy}",
"followers_url": "https://api.github.com/users/hasakikiki/followers",
"following_url": "https://api.github.com/users/hasakikiki/following{/other_user}",
"gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hasakikiki",
"id": 66322047,
"login": "hasakikiki",
"node_id": "MDQ6VXNlcjY2MzIyMDQ3",
"organizations_url": "https://api.github.com/users/hasakikiki/orgs",
"received_events_url": "https://api.github.com/users/hasakikiki/received_events",
"repos_url": "https://api.github.com/users/hasakikiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hasakikiki",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4958/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4958/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5611
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5611/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5611/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5611/events
|
https://github.com/huggingface/datasets/pull/5611
| 1,611,197,906
|
PR_kwDODunzps5LW2Lx
| 5,611
|
add Dataset.to_list
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4",
"events_url": "https://api.github.com/users/kyoto7250/events{/privacy}",
"followers_url": "https://api.github.com/users/kyoto7250/followers",
"following_url": "https://api.github.com/users/kyoto7250/following{/other_user}",
"gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kyoto7250",
"id": 50972773,
"login": "kyoto7250",
"node_id": "MDQ6VXNlcjUwOTcyNzcz",
"organizations_url": "https://api.github.com/users/kyoto7250/orgs",
"received_events_url": "https://api.github.com/users/kyoto7250/received_events",
"repos_url": "https://api.github.com/users/kyoto7250/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kyoto7250",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! `Table.to_pylist` requires PyArrow 7.0+, and our minimal version requirement is 6.0, so we need to bump the version requirement to avoid CI failure. I'll do this in a separate PR.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006857 / 0.011353 (-0.004496) | 0.004711 / 0.011008 (-0.006297) | 0.098332 / 0.038508 (0.059824) | 0.028547 / 0.023109 (0.005438) | 0.307647 / 0.275898 (0.031749) | 0.334891 / 0.323480 (0.011411) | 0.005252 / 0.007986 (-0.002734) | 0.003495 / 0.004328 (-0.000833) | 0.075529 / 0.004250 (0.071279) | 0.042167 / 0.037052 (0.005114) | 0.308509 / 0.258489 (0.050020) | 0.348294 / 0.293841 (0.054453) | 0.032042 / 0.128546 (-0.096504) | 0.011684 / 0.075646 (-0.063962) | 0.321740 / 0.419271 (-0.097531) | 0.057725 / 0.043533 (0.014193) | 0.309431 / 0.255139 (0.054292) | 0.326818 / 0.283200 (0.043618) | 0.093261 / 0.141683 (-0.048422) | 1.475344 / 1.452155 (0.023190) | 1.563952 / 1.492716 (0.071236) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205056 / 0.018006 (0.187050) | 0.421656 / 0.000490 (0.421166) | 0.004167 / 0.000200 (0.003967) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023935 / 0.037411 (-0.013476) | 0.097220 / 0.014526 (0.082695) | 0.104942 / 0.176557 (-0.071615) | 0.170339 / 0.737135 (-0.566796) | 0.107556 / 0.296338 (-0.188782) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424509 / 0.215209 (0.209300) | 4.223637 / 2.077655 (2.145982) | 2.090700 / 1.504120 (0.586580) | 1.902537 / 1.541195 (0.361343) | 1.981192 / 1.468490 
(0.512701) | 0.695272 / 4.584777 (-3.889505) | 3.570169 / 3.745712 (-0.175544) | 1.885007 / 5.269862 (-3.384854) | 1.162828 / 4.565676 (-3.402848) | 0.084956 / 0.424275 (-0.339319) | 0.012818 / 0.007607 (0.005210) | 0.534395 / 0.226044 (0.308351) | 5.354318 / 2.268929 (3.085389) | 2.436875 / 55.444624 (-53.007749) | 2.111365 / 6.876477 (-4.765112) | 2.232874 / 2.142072 (0.090802) | 0.804703 / 4.805227 (-4.000524) | 0.152406 / 6.500664 (-6.348258) | 0.066926 / 0.075469 (-0.008543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198621 / 1.841788 (-0.643166) | 13.907491 / 8.074308 (5.833183) | 14.356286 / 10.191392 (4.164894) | 0.140714 / 0.680424 (-0.539710) | 0.016440 / 0.534201 (-0.517761) | 0.380868 / 0.579283 (-0.198415) | 0.396004 / 0.434364 (-0.038360) | 0.448275 / 0.540337 (-0.092062) | 0.537818 / 1.386936 (-0.849118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004652 / 0.011008 (-0.006356) | 0.076449 / 0.038508 (0.037941) | 0.028389 / 0.023109 (0.005280) | 0.378644 / 0.275898 (0.102746) | 0.423870 / 0.323480 (0.100391) | 0.005824 / 0.007986 (-0.002162) | 0.003398 / 0.004328 (-0.000931) | 0.075575 / 0.004250 (0.071324) | 0.039656 / 0.037052 (0.002604) | 0.370072 / 0.258489 (0.111583) | 0.441812 / 0.293841 (0.147971) | 0.031817 / 0.128546 (-0.096729) | 0.011701 / 0.075646 (-0.063946) | 0.085759 / 0.419271 (-0.333513) | 0.042328 / 0.043533 (-0.001205) | 0.364103 / 0.255139 (0.108964) | 0.413910 / 0.283200 (0.130711) | 0.090871 / 0.141683 (-0.050812) | 1.505749 / 1.452155 (0.053594) | 1.608555 / 1.492716 (0.115839) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212533 / 0.018006 (0.194527) | 0.404519 / 0.000490 (0.404030) | 0.000373 / 0.000200 (0.000174) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024849 / 0.037411 (-0.012562) | 0.100769 / 0.014526 (0.086243) | 0.110450 / 0.176557 (-0.066107) | 0.161715 / 0.737135 (-0.575420) | 0.113599 / 0.296338 (-0.182739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436780 / 0.215209 (0.221571) | 4.387103 / 2.077655 (2.309448) | 2.081942 / 1.504120 (0.577822) | 1.873661 / 1.541195 (0.332466) | 1.947718 / 1.468490 (0.479228) | 0.696434 / 4.584777 (-3.888343) | 3.405300 / 3.745712 (-0.340412) | 1.897388 / 5.269862 (-3.372474) | 1.169969 / 4.565676 (-3.395707) | 0.083085 / 0.424275 (-0.341190) | 0.012480 / 0.007607 (0.004873) | 0.535635 / 0.226044 (0.309591) | 5.364462 / 2.268929 (3.095533) | 2.531168 / 55.444624 (-52.913457) | 2.184324 / 6.876477 (-4.692153) | 2.228613 / 2.142072 (0.086541) | 0.807127 / 4.805227 (-3.998100) | 0.151971 / 6.500664 (-6.348693) | 0.068430 / 0.075469 (-0.007039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306401 / 1.841788 (-0.535387) | 14.479552 / 8.074308 (6.405244) | 14.428398 / 10.191392 (4.237006) | 0.159505 / 0.680424 (-0.520919) | 0.016856 / 0.534201 (-0.517344) | 0.375197 / 0.579283 (-0.204086) | 0.384328 / 0.434364 (-0.050036) | 0.440688 / 0.540337 (-0.099650) | 0.524998 / 1.386936 (-0.861938) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-06T11:21:57Z
| 2023-03-27T13:34:19Z
| 2023-03-27T13:26:38Z
|
CONTRIBUTOR
| null | null | null |
Closes https://github.com/huggingface/datasets/issues/5606
This PR adds the `Dataset.to_list` method.
Thank you in advance.
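A sketch of the intended usage, assuming the method mirrors `pyarrow.Table.to_pylist` (which, per the review comment above, requires PyArrow 7.0+):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})
print(ds.to_list())  # [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'y'}]
```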
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5611/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5611/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5611.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5611",
"merged_at": "2023-03-27T13:26:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5611.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5611"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6317
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6317/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6317/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6317/events
|
https://github.com/huggingface/datasets/issues/6317
| 1,951,965,668
|
I_kwDODunzps50WKHk
| 6,317
|
sentiment140 dataset unavailable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52670382?v=4",
"events_url": "https://api.github.com/users/AndreasKarasenko/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreasKarasenko/followers",
"following_url": "https://api.github.com/users/AndreasKarasenko/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasKarasenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreasKarasenko",
"id": 52670382,
"login": "AndreasKarasenko",
"node_id": "MDQ6VXNlcjUyNjcwMzgy",
"organizations_url": "https://api.github.com/users/AndreasKarasenko/orgs",
"received_events_url": "https://api.github.com/users/AndreasKarasenko/received_events",
"repos_url": "https://api.github.com/users/AndreasKarasenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreasKarasenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasKarasenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreasKarasenko",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting. We are investigating the issue.",
"We have opened an issue in the corresponding Hub dataset: https://huggingface.co/datasets/sentiment140/discussions/3\r\n\r\nLet's continue the discussion there."
] | 2023-10-19T11:25:21Z
| 2023-10-19T13:04:56Z
| 2023-10-19T13:04:56Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Loading the dataset using `load_dataset("sentiment140")` returns the following error:
`ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)`
### Steps to reproduce the bug
Run the following code (version should not matter).
```python
from datasets import load_dataset
data = load_dataset("sentiment140")
```
### Expected behavior
The dataset should load just like any other.
The main issue is that it is no longer hosted by Stanford. It is still available from a [Google Drive link](https://docs.google.com/file/d/0B04GJPshIjmPRnZManQwWEdTZjg/edit).
### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1
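A hedged workaround sketch: if the `trainingandtestdata.zip` archive has been downloaded and unzipped manually (e.g. via the Google Drive link above), the CSVs can be loaded with the generic `csv` builder. The column names and `latin-1` encoding below are assumptions about the original files:

```python
from datasets import load_dataset

data = load_dataset(
    "csv",
    data_files={"train": "training.1600000.processed.noemoticon.csv"},  # local path after unzipping
    column_names=["sentiment", "id", "date", "query", "user", "text"],  # assumed schema
    encoding="latin-1",  # assumed encoding of the original files
)
```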
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6317/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6317/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5245
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5245/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5245/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5245/events
|
https://github.com/huggingface/datasets/issues/5245
| 1,450,376,433
|
I_kwDODunzps5Wcvzx
| 5,245
|
Unable to rename columns in streaming dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4",
"events_url": "https://api.github.com/users/peregilk/events{/privacy}",
"followers_url": "https://api.github.com/users/peregilk/followers",
"following_url": "https://api.github.com/users/peregilk/following{/other_user}",
"gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/peregilk",
"id": 9079808,
"login": "peregilk",
"node_id": "MDQ6VXNlcjkwNzk4MDg=",
"organizations_url": "https://api.github.com/users/peregilk/orgs",
"received_events_url": "https://api.github.com/users/peregilk/received_events",
"repos_url": "https://api.github.com/users/peregilk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peregilk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/peregilk",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
] | null |
[
"Hi @peregilk this bug is directly related to https://github.com/huggingface/datasets/issues/3888, and still not fixed... But I'll try to have a look!",
"Thanks @alvarobartt. It is great if you are able to fix it, but when reading the explanation it seems like it is possible to work around it.\r\n\r\nWe also tried keeping the 'info.features' and then adding a modified version back after the remove/rename. Unforutunately that leads to a dataset that is not possible to iterate over.",
"So if you iterate over the `IterableDataset` as `next(iter(ds))` and then run `rename_columns` when checking that data it will work, but in the end, it's just renaming the column one example/batch at a time, not renaming the column name for all the entries in the dataset, which is the ideal.",
"@alvarobartt Thanks. My use case was that I wanted to do multiple things, ie removing all unnecessary columns, renaming some valid columns, and then using cast (in my case checking if the audio is not 16K and casting it). It is just convenient to look into the info.features between each of these operations. Alternatively, I will just plan ahead...;) To me it seems like all the operations are working.\r\n\r\nThanks for the advice. It was very useful.",
"If we know the features before renaming, then we know the features after renaming, so we can pass the new features to the returned dataset in `rename_column` indeed ! If anyone is interested in contributing, feel free to open a PR and I'd be happy to help / give some pointers :)",
"Sure @lhoestq thanks! I’ll try to work on that",
"#self-assign"
] | 2022-11-15T21:04:41Z
| 2022-11-28T12:53:24Z
| 2022-11-28T12:53:24Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Renaming a column in a streaming dataset destroys the features object.
### Steps to reproduce the bug
The following code illustrates the error:
```python
from datasets import load_dataset
dataset = load_dataset('mc4', 'en', streaming=True, split='train')
dataset.info.features
# {'text': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)}
dataset = dataset.rename_column("text", "content")
dataset.info.features
# This returned object is now None!
```
### Expected behavior
Renaming should just update the column name in the features, not set `info.features` to `None`.
### Environment info
datasets 2.6.1
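A minimal workaround sketch, assuming a `datasets` release where `IterableDataset.cast` is available: since the features before renaming are known, they can be re-attached manually after the rename.

```python
from datasets import Features, Value, load_dataset

dataset = load_dataset("mc4", "en", streaming=True, split="train")
dataset = dataset.rename_column("text", "content")
# Re-attach the renamed features by hand (schema taken from the snippet above).
features = Features(
    {"content": Value("string"), "timestamp": Value("string"), "url": Value("string")}
)
dataset = dataset.cast(features)
print(dataset.info.features)  # presumably no longer None
```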
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5245/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5245/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7061
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7061/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7061/events
|
https://github.com/huggingface/datasets/issues/7061
| 2,423,786,881
|
I_kwDODunzps6QeA2B
| 7,061
|
Custom Dataset | Still Raise Error while handling errors in _generate_examples
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4",
"events_url": "https://api.github.com/users/hahmad2008/events{/privacy}",
"followers_url": "https://api.github.com/users/hahmad2008/followers",
"following_url": "https://api.github.com/users/hahmad2008/following{/other_user}",
"gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hahmad2008",
"id": 68266028,
"login": "hahmad2008",
"node_id": "MDQ6VXNlcjY4MjY2MDI4",
"organizations_url": "https://api.github.com/users/hahmad2008/orgs",
"received_events_url": "https://api.github.com/users/hahmad2008/received_events",
"repos_url": "https://api.github.com/users/hahmad2008/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hahmad2008",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-07-22T21:18:12Z
| 2024-09-09T14:48:07Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I followed this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) for handling errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading files without raising an exception and exiting the execution.
```python
def _generate_examples(self, filepaths):
    errors = []
    id_ = 0
    for filepath in filepaths:
        try:
            with open(filepath, 'r') as f:
                for line in f:
                    json_obj = json.loads(line)
                    yield id_, json_obj
                    id_ += 1
        except Exception as exc:
            logger.error(f"error occur at filepath: {filepath}")
            errors.append(exc)  # the original `errors.append(error)` referenced an undefined name
```
The `logger.error` message is printed, but the exception is still raised and the run exits:
```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
json_obj = json.loads(line)
File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ? examples/s]>
RemoteTraceback:
"""
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in
_write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""
The above exception was the direct cause of the following exception:
│ │
│ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. │
│ py:1377 in <listcomp> │
│ │
│ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │
│ 1375 │ │ │ │ │ break │
│ 1376 │ │ # we get the result in case there's an error to raise │
│ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │
│ 1378 │
│ │
│ ╭──────────────────────────────── locals ─────────────────────────────────╮ │
│ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │
│ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ ╰─────────────────────────────────────────────────────────────────────────╯ │
│ │
│ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │
│ in get │
│ │
│ 768 │ │ if self._success: │
│ 769 │ │ │ return self._value │
│ 770 │ │ else: │
│ ❱ 771 │ │ │ raise self._value │
│ 772 │ │
│ 773 │ def _set(self, i, obj): │
│ 774 │ │ self._success, self._value = obj │
│ │
│ ╭────────────────────────────── locals ──────────────────────────────╮ │
│ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ │ timeout = None │ │
│ ╰────────────────────────────────────────────────────────────────────╯ │
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
Same as above.
### Expected behavior
The script should handle the error and continue reading the remaining files.
### Environment info
python 3.9
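A minimal sketch of a fix, assuming malformed lines should simply be skipped: catching the error per line (instead of per file) lets the generator keep yielding the valid examples, so the writer still receives data.

```python
# `json` and `logger` are assumed to come from the surrounding dataset script.
def _generate_examples(self, filepaths):
    id_ = 0
    for filepath in filepaths:
        with open(filepath, "r") as f:
            for line in f:
                try:
                    json_obj = json.loads(line)
                except json.JSONDecodeError:
                    logger.error(f"skipping malformed line in {filepath}")
                    continue
                yield id_, json_obj
                id_ += 1
```

Note that if no valid example is yielded at all, the builder will still raise `SchemaInferenceError` (as in the traceback above), so at least one good example or an explicit `features` argument is needed.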
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7061/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6795
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6795/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6795/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6795/events
|
https://github.com/huggingface/datasets/pull/6795
| 2,233,618,719
|
PR_kwDODunzps5sJAC8
| 6,795
|
Add CLI function to convert script-dataset to Parquet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6795). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets once this PR is merged, I would suggest making a release. Do you agree?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005367 / 0.011353 (-0.005986) | 0.003161 / 0.011008 (-0.007847) | 0.063259 / 0.038508 (0.024751) | 0.030550 / 0.023109 (0.007441) | 0.243789 / 0.275898 (-0.032109) | 0.262474 / 0.323480 (-0.061006) | 0.003157 / 0.007986 (-0.004829) | 0.002586 / 0.004328 (-0.001742) | 0.049336 / 0.004250 (0.045085) | 0.046434 / 0.037052 (0.009382) | 0.249142 / 0.258489 (-0.009347) | 0.282953 / 0.293841 (-0.010888) | 0.027881 / 0.128546 (-0.100666) | 0.010069 / 0.075646 (-0.065578) | 0.207937 / 0.419271 (-0.211334) | 0.036005 / 0.043533 (-0.007528) | 0.251850 / 0.255139 (-0.003288) | 0.265156 / 0.283200 (-0.018044) | 0.019780 / 0.141683 (-0.121903) | 1.124301 / 1.452155 (-0.327853) | 1.177392 / 1.492716 (-0.315324) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091045 / 0.018006 (0.073039) | 0.301258 / 0.000490 (0.300769) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018726 / 0.037411 (-0.018686) | 0.061623 / 0.014526 (0.047097) | 0.073905 / 0.176557 (-0.102651) | 0.119444 / 0.737135 (-0.617692) | 0.074614 / 0.296338 (-0.221725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287313 / 0.215209 (0.072104) | 2.772864 / 2.077655 (0.695209) | 1.465267 / 1.504120 (-0.038853) | 1.343666 / 1.541195 (-0.197528) | 1.329390 / 
1.468490 (-0.139100) | 0.570222 / 4.584777 (-4.014555) | 2.421835 / 3.745712 (-1.323877) | 2.747282 / 5.269862 (-2.522579) | 1.728733 / 4.565676 (-2.836943) | 0.063671 / 0.424275 (-0.360604) | 0.005343 / 0.007607 (-0.002264) | 0.335078 / 0.226044 (0.109033) | 3.334305 / 2.268929 (1.065376) | 1.779496 / 55.444624 (-53.665129) | 1.496475 / 6.876477 (-5.380002) | 1.507848 / 2.142072 (-0.634224) | 0.653653 / 4.805227 (-4.151575) | 0.118373 / 6.500664 (-6.382291) | 0.041727 / 0.075469 (-0.033742) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981985 / 1.841788 (-0.859803) | 11.290978 / 8.074308 (3.216670) | 9.499217 / 10.191392 (-0.692175) | 0.131353 / 0.680424 (-0.549071) | 0.014416 / 0.534201 (-0.519785) | 0.288381 / 0.579283 (-0.290902) | 0.265483 / 0.434364 (-0.168880) | 0.323438 / 0.540337 (-0.216900) | 0.417946 / 1.386936 (-0.968990) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005272 / 0.011353 (-0.006081) | 0.003551 / 0.011008 (-0.007457) | 0.050173 / 0.038508 (0.011665) | 0.031291 / 0.023109 (0.008182) | 0.278658 / 0.275898 (0.002760) | 0.301812 / 0.323480 (-0.021668) | 0.004237 / 0.007986 (-0.003748) | 0.002713 / 0.004328 (-0.001615) | 0.049483 / 0.004250 (0.045233) | 0.039995 / 0.037052 (0.002943) | 0.293101 / 0.258489 (0.034612) | 0.319956 / 0.293841 (0.026116) | 0.029127 / 0.128546 (-0.099419) | 0.010247 / 0.075646 (-0.065400) | 0.057929 / 0.419271 (-0.361342) | 0.032942 / 0.043533 (-0.010591) | 0.281677 / 0.255139 (0.026538) | 0.297937 / 0.283200 (0.014737) | 0.018285 / 0.141683 (-0.123398) | 1.272858 / 1.452155 (-0.179297) | 1.213375 / 1.492716 (-0.279342) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091110 / 0.018006 (0.073104) | 0.302589 / 0.000490 (0.302099) | 0.000214 / 0.000200 (0.000014) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021520 / 0.037411 (-0.015891) | 0.075013 / 0.014526 (0.060487) | 0.088695 / 0.176557 (-0.087862) | 0.128281 / 0.737135 (-0.608854) | 0.090611 / 0.296338 (-0.205727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297457 / 0.215209 (0.082248) | 2.928612 / 2.077655 (0.850957) | 1.613245 / 1.504120 (0.109125) | 1.485263 / 1.541195 (-0.055931) | 1.496885 / 1.468490 (0.028395) | 0.570120 / 4.584777 (-4.014657) | 2.487532 / 3.745712 (-1.258180) | 2.761552 / 5.269862 (-2.508309) | 1.731864 / 4.565676 (-2.833812) | 0.062989 / 0.424275 (-0.361286) | 0.005428 / 0.007607 (-0.002179) | 0.354932 / 0.226044 (0.128888) | 3.524475 / 2.268929 (1.255547) | 1.977684 / 55.444624 (-53.466941) | 1.692568 / 6.876477 (-5.183909) | 1.673003 / 2.142072 (-0.469069) | 0.643976 / 4.805227 (-4.161251) | 0.116499 / 6.500664 (-6.384165) | 0.040772 / 0.075469 (-0.034697) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020354 / 1.841788 (-0.821434) | 12.143991 / 8.074308 (4.069683) | 10.354058 / 10.191392 (0.162666) | 0.145460 / 0.680424 (-0.534964) | 0.015356 / 0.534201 (-0.518845) | 0.307190 / 0.579283 (-0.272093) | 0.276664 / 0.434364 (-0.157699) | 0.350068 / 0.540337 (-0.190269) | 0.440824 / 1.386936 (-0.946112) |\n\n</details>\n</details>\n\n\n"
] | 2024-04-09T14:45:12Z
| 2024-04-17T08:41:23Z
| 2024-04-12T15:27:04Z
|
MEMBER
| null | null | null |
Closes #6690.
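For reference, the command introduced here is presumably invoked as `datasets-cli convert_to_parquet <dataset_id>` (name inferred from the PR title and existing CLI conventions); it converts a script-based Hub dataset to Parquet files and opens a pull request on the corresponding Hub repository.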
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6795/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6795/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6795.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6795",
"merged_at": "2024-04-12T15:27:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6795.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6795"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7048
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7048/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7048/events
|
https://github.com/huggingface/datasets/issues/7048
| 2,408,487,547
|
I_kwDODunzps6Pjpp7
| 7,048
|
ImportError: numpy.core.multiarray when using `filter`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4",
"events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}",
"followers_url": "https://api.github.com/users/kamilakesbi/followers",
"following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}",
"gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kamilakesbi",
"id": 45195979,
"login": "kamilakesbi",
"node_id": "MDQ6VXNlcjQ1MTk1OTc5",
"organizations_url": "https://api.github.com/users/kamilakesbi/orgs",
"received_events_url": "https://api.github.com/users/kamilakesbi/received_events",
"repos_url": "https://api.github.com/users/kamilakesbi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kamilakesbi",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Could you please check your `numpy` version?",
"I got this issue while using numpy version 2.0. \r\n\r\nI solved it by switching back to numpy 1.26.0 :) ",
"We recently added support for numpy 2.0, but it is not released yet.",
"Ok I see, thanks! I think we can close this issue for now as switching back to version 1.26.0 solves the problem :) "
] | 2024-07-15T11:21:04Z
| 2024-07-16T10:11:25Z
| 2024-07-16T10:11:25Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I can't apply the filter method on my dataset.
### Steps to reproduce the bug
The following snippet triggers the bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
)
```
I get the following error:
`ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).`
### Expected behavior
It should work properly!
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
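
For anyone hitting this before a numpy-2.0-compatible `datasets` release lands, here is a minimal sketch of a workaround, assuming (as the comments on this issue suggest) that `numpy>=2.0` is the culprit:

```python
# Sketch only: fail fast with an actionable message instead of the cryptic
# "numpy.core.multiarray failed to import" error. Assumes numpy>=2.0 is the cause.
import numpy as np

if int(np.__version__.split(".")[0]) >= 2:
    raise RuntimeError(
        f"numpy {np.__version__} detected; this datasets release needs numpy<2. "
        "Try: pip install 'numpy<2'"
    )
```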
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4",
"events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}",
"followers_url": "https://api.github.com/users/kamilakesbi/followers",
"following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}",
"gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kamilakesbi",
"id": 45195979,
"login": "kamilakesbi",
"node_id": "MDQ6VXNlcjQ1MTk1OTc5",
"organizations_url": "https://api.github.com/users/kamilakesbi/orgs",
"received_events_url": "https://api.github.com/users/kamilakesbi/received_events",
"repos_url": "https://api.github.com/users/kamilakesbi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kamilakesbi",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7048/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5891
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5891/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5891/events
|
https://github.com/huggingface/datasets/pull/5891
| 1,722,384,135
|
PR_kwDODunzps5RKchn
| 5,891
|
Make split slicing consistent with list slicing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5891). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006916 / 0.011353 (-0.004437) | 0.004749 / 0.011008 (-0.006259) | 0.096086 / 0.038508 (0.057578) | 0.035448 / 0.023109 (0.012338) | 0.299645 / 0.275898 (0.023747) | 0.331279 / 0.323480 (0.007799) | 0.006018 / 0.007986 (-0.001968) | 0.004210 / 0.004328 (-0.000118) | 0.072998 / 0.004250 (0.068747) | 0.050082 / 0.037052 (0.013030) | 0.297714 / 0.258489 (0.039225) | 0.365523 / 0.293841 (0.071682) | 0.028081 / 0.128546 (-0.100465) | 0.009072 / 0.075646 (-0.066574) | 0.327628 / 0.419271 (-0.091643) | 0.051165 / 0.043533 (0.007633) | 0.295091 / 0.255139 (0.039952) | 0.320052 / 0.283200 (0.036852) | 0.109841 / 0.141683 (-0.031842) | 1.467867 / 1.452155 (0.015712) | 1.572600 / 1.492716 (0.079884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281490 / 0.018006 (0.263484) | 0.499259 / 0.000490 (0.498770) | 0.000691 / 0.000200 (0.000491) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027548 / 0.037411 (-0.009863) | 0.106592 / 0.014526 (0.092066) | 0.118654 / 0.176557 (-0.057902) | 0.174313 / 0.737135 (-0.562822) | 0.124491 / 0.296338 (-0.171848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399674 / 0.215209 (0.184465) | 3.984092 / 2.077655 (1.906437) | 1.790935 / 1.504120 (0.286815) | 1.593612 / 1.541195 (0.052417) | 1.694595 / 1.468490 
(0.226105) | 0.517588 / 4.584777 (-4.067189) | 3.724353 / 3.745712 (-0.021359) | 3.244807 / 5.269862 (-2.025054) | 1.602929 / 4.565676 (-2.962748) | 0.065334 / 0.424275 (-0.358941) | 0.012259 / 0.007607 (0.004652) | 0.501355 / 0.226044 (0.275311) | 4.996546 / 2.268929 (2.727618) | 2.279333 / 55.444624 (-53.165291) | 1.940126 / 6.876477 (-4.936351) | 2.122945 / 2.142072 (-0.019128) | 0.626104 / 4.805227 (-4.179123) | 0.141278 / 6.500664 (-6.359386) | 0.064522 / 0.075469 (-0.010947) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195351 / 1.841788 (-0.646436) | 15.258932 / 8.074308 (7.184624) | 14.627623 / 10.191392 (4.436231) | 0.266897 / 0.680424 (-0.413527) | 0.017557 / 0.534201 (-0.516644) | 0.392932 / 0.579283 (-0.186351) | 0.416409 / 0.434364 (-0.017955) | 0.469100 / 0.540337 (-0.071237) | 0.556247 / 1.386936 (-0.830689) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006880 / 0.011353 (-0.004473) | 0.004837 / 0.011008 (-0.006171) | 0.074518 / 0.038508 (0.036010) | 0.034204 / 0.023109 (0.011095) | 0.365100 / 0.275898 (0.089202) | 0.394976 / 0.323480 (0.071496) | 0.006364 / 0.007986 (-0.001621) | 0.004269 / 0.004328 (-0.000060) | 0.073531 / 0.004250 (0.069281) | 0.051334 / 0.037052 (0.014281) | 0.373904 / 0.258489 (0.115415) | 0.413662 / 0.293841 (0.119821) | 0.028779 / 0.128546 (-0.099767) | 0.009292 / 0.075646 (-0.066354) | 0.081574 / 0.419271 (-0.337698) | 0.046531 / 0.043533 (0.002998) | 0.368995 / 0.255139 (0.113856) | 0.376938 / 0.283200 (0.093739) | 0.112576 / 0.141683 (-0.029107) | 1.458880 / 1.452155 (0.006725) | 1.550918 / 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.319521 / 0.018006 (0.301515) | 0.510146 / 0.000490 (0.509656) | 0.000438 / 0.000200 (0.000238) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033082 / 0.037411 (-0.004329) | 0.118009 / 0.014526 (0.103483) | 0.127108 / 0.176557 (-0.049448) | 0.176600 / 0.737135 (-0.560535) | 0.133790 / 0.296338 (-0.162549) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437360 / 0.215209 (0.222151) | 4.367426 / 2.077655 (2.289771) | 2.193646 / 1.504120 (0.689526) | 2.025002 / 1.541195 (0.483808) | 2.142347 / 1.468490 (0.673856) | 0.525497 / 4.584777 (-4.059280) | 3.751275 / 3.745712 (0.005563) | 1.912271 / 5.269862 (-3.357590) | 1.087286 / 4.565676 (-3.478390) | 0.066328 / 0.424275 (-0.357947) | 0.011904 / 0.007607 (0.004297) | 0.545870 / 0.226044 (0.319825) | 5.434481 / 2.268929 (3.165552) | 2.719745 / 55.444624 (-52.724880) | 2.445001 / 6.876477 (-4.431476) | 2.500205 / 2.142072 (0.358133) | 0.645735 / 4.805227 (-4.159492) | 0.144210 / 6.500664 (-6.356455) | 0.065688 / 0.075469 (-0.009781) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273522 / 1.841788 (-0.568265) | 15.771778 / 8.074308 (7.697470) | 14.685261 / 10.191392 (4.493869) | 0.176523 / 0.680424 (-0.503900) | 0.017877 / 0.534201 (-0.516324) | 0.392687 / 0.579283 (-0.186596) | 0.449992 / 0.434364 (0.015628) | 0.462851 / 0.540337 (-0.077487) | 0.560178 / 1.386936 (-0.826758) |\n\n</details>\n</details>\n\n\n",
"Just curious how's this PR going? I was facing similar issues.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005182 / 0.011353 (-0.006171) | 0.003642 / 0.011008 (-0.007366) | 0.063225 / 0.038508 (0.024717) | 0.030534 / 0.023109 (0.007425) | 0.247135 / 0.275898 (-0.028763) | 0.269880 / 0.323480 (-0.053600) | 0.003029 / 0.007986 (-0.004956) | 0.002656 / 0.004328 (-0.001673) | 0.048647 / 0.004250 (0.044397) | 0.043300 / 0.037052 (0.006247) | 0.261586 / 0.258489 (0.003097) | 0.288003 / 0.293841 (-0.005838) | 0.029556 / 0.128546 (-0.098990) | 0.010604 / 0.075646 (-0.065042) | 0.208228 / 0.419271 (-0.211043) | 0.036079 / 0.043533 (-0.007454) | 0.255650 / 0.255139 (0.000511) | 0.283756 / 0.283200 (0.000556) | 0.017992 / 0.141683 (-0.123691) | 1.134861 / 1.452155 (-0.317293) | 1.165310 / 1.492716 (-0.327406) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090709 / 0.018006 (0.072702) | 0.301131 / 0.000490 (0.300641) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018186 / 0.037411 (-0.019225) | 0.061704 / 0.014526 (0.047178) | 0.074085 / 0.176557 (-0.102471) | 0.119107 / 0.737135 (-0.618029) | 0.074166 / 0.296338 (-0.222172) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287430 / 0.215209 (0.072221) | 2.832602 / 2.077655 (0.754947) | 1.485971 / 1.504120 (-0.018149) | 1.366806 / 1.541195 (-0.174388) | 1.359044 / 
1.468490 (-0.109446) | 0.583573 / 4.584777 (-4.001204) | 2.376348 / 3.745712 (-1.369364) | 2.766067 / 5.269862 (-2.503795) | 1.732066 / 4.565676 (-2.833610) | 0.064489 / 0.424275 (-0.359786) | 0.004974 / 0.007607 (-0.002633) | 0.343600 / 0.226044 (0.117555) | 3.392277 / 2.268929 (1.123349) | 1.840875 / 55.444624 (-53.603750) | 1.543068 / 6.876477 (-5.333409) | 1.573766 / 2.142072 (-0.568307) | 0.651920 / 4.805227 (-4.153308) | 0.117797 / 6.500664 (-6.382867) | 0.042248 / 0.075469 (-0.033221) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976276 / 1.841788 (-0.865511) | 11.386207 / 8.074308 (3.311899) | 10.473297 / 10.191392 (0.281905) | 0.155482 / 0.680424 (-0.524942) | 0.014182 / 0.534201 (-0.520019) | 0.288501 / 0.579283 (-0.290782) | 0.263505 / 0.434364 (-0.170859) | 0.325396 / 0.540337 (-0.214942) | 0.428070 / 1.386936 (-0.958866) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005310 / 0.011353 (-0.006043) | 0.003510 / 0.011008 (-0.007498) | 0.049418 / 0.038508 (0.010910) | 0.031668 / 0.023109 (0.008559) | 0.266345 / 0.275898 (-0.009553) | 0.289230 / 0.323480 (-0.034249) | 0.004168 / 0.007986 (-0.003818) | 0.002769 / 0.004328 (-0.001559) | 0.049786 / 0.004250 (0.045536) | 0.044009 / 0.037052 (0.006957) | 0.281882 / 0.258489 (0.023393) | 0.309962 / 0.293841 (0.016121) | 0.047216 / 0.128546 (-0.081330) | 0.010661 / 0.075646 (-0.064986) | 0.058619 / 0.419271 (-0.360652) | 0.034658 / 0.043533 (-0.008875) | 0.269676 / 0.255139 (0.014537) | 0.288581 / 0.283200 (0.005381) | 0.018159 / 0.141683 (-0.123523) | 1.177047 / 1.452155 (-0.275107) | 1.206391 / 1.492716 (-0.286325) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091422 / 0.018006 (0.073416) | 0.301936 / 0.000490 (0.301446) | 0.000204 / 0.000200 (0.000004) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022347 / 0.037411 (-0.015064) | 0.075856 / 0.014526 (0.061330) | 0.086459 / 0.176557 (-0.090097) | 0.124683 / 0.737135 (-0.612452) | 0.087559 / 0.296338 (-0.208779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287423 / 0.215209 (0.072214) | 2.840060 / 2.077655 (0.762405) | 1.561290 / 1.504120 (0.057170) | 1.442124 / 1.541195 (-0.099071) | 1.458619 / 1.468490 (-0.009871) | 0.578217 / 4.584777 (-4.006560) | 2.450982 / 3.745712 (-1.294731) | 2.685603 / 5.269862 (-2.584259) | 1.750036 / 4.565676 (-2.815640) | 0.063797 / 0.424275 (-0.360478) | 0.005158 / 0.007607 (-0.002449) | 0.342598 / 0.226044 (0.116553) | 3.356456 / 2.268929 (1.087527) | 1.913493 / 55.444624 (-53.531132) | 1.638930 / 6.876477 (-5.237547) | 1.751691 / 2.142072 (-0.390382) | 0.662609 / 4.805227 (-4.142619) | 0.117465 / 6.500664 (-6.383199) | 0.041316 / 0.075469 (-0.034153) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.019255 / 1.841788 (-0.822533) | 12.084293 / 8.074308 (4.009985) | 10.957918 / 10.191392 (0.766526) | 0.142433 / 0.680424 (-0.537991) | 0.015969 / 0.534201 (-0.518232) | 0.292411 / 0.579283 (-0.286872) | 0.278925 / 0.434364 (-0.155439) | 0.329967 / 0.540337 (-0.210370) | 0.421786 / 1.386936 (-0.965150) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-23T16:04:33Z
| 2024-01-31T16:00:26Z
| 2024-01-31T15:54:17Z
|
COLLABORATOR
| null | null | null |
Fix #1774, fix #5875
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5891/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"merged_at": "2024-01-31T15:54:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5876
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5876/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5876/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5876/events
|
https://github.com/huggingface/datasets/issues/5876
| 1,717,978,985
|
I_kwDODunzps5mZkdp
| 5,876
|
Incompatibility with DataLab
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26192135?v=4",
"events_url": "https://api.github.com/users/helpmefindaname/events{/privacy}",
"followers_url": "https://api.github.com/users/helpmefindaname/followers",
"following_url": "https://api.github.com/users/helpmefindaname/following{/other_user}",
"gists_url": "https://api.github.com/users/helpmefindaname/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/helpmefindaname",
"id": 26192135,
"login": "helpmefindaname",
"node_id": "MDQ6VXNlcjI2MTkyMTM1",
"organizations_url": "https://api.github.com/users/helpmefindaname/orgs",
"received_events_url": "https://api.github.com/users/helpmefindaname/received_events",
"repos_url": "https://api.github.com/users/helpmefindaname/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/helpmefindaname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helpmefindaname/subscriptions",
"type": "User",
"url": "https://api.github.com/users/helpmefindaname",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] | null |
[
"Indeed, `clobber=True` (with a warning if the existing protocol will be overwritten) should fix the issue, but maybe a better solution is to register our compression filesystem before the script is executed and unregister them afterward. WDYT @lhoestq @albertvillanova?",
"I think we should use clobber and show a warning if it overwrote a registered filesystem indeed ! This way the user can re-register the filesystems if needed. Though they should probably be compatible (and maybe do the exact same thing) so I wouldn't de-register the `datasets` filesystems"
] | 2023-05-20T01:39:11Z
| 2023-05-25T06:42:34Z
| 2023-05-25T06:42:34Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello,
I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies.
I noticed that I cannot import both libraries, as they both register FileSystems in `fsspec` and each expects that those FileSystems have not been registered before.
When running the code below, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\__init__.py", line 28, in <module>
from datalabs.arrow_dataset import concatenate_datasets, Dataset
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_dataset.py", line 60, in <module>
from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_writer.py", line 28, in <module>
from datalabs.features import (
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\__init__.py", line 2, in <module>
from datalabs.features.audio import Audio
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\audio.py", line 21, in <module>
from datalabs.utils.streaming_download_manager import xopen
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\utils\streaming_download_manager.py", line 16, in <module>
from datalabs.filesystems import COMPRESSION_FILESYSTEMS
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\filesystems\__init__.py", line 37, in <module>
fsspec.register_implementation(fs_class.protocol, fs_class)
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\fsspec\registry.py", line 51, in register_implementation
raise ValueError(
ValueError: Name (bz2) already in the registry and clobber is False
```
I think a simple solution would be to just set `clobber=True` in https://github.com/huggingface/datasets/blob/main/src/datasets/filesystems/__init__.py#L28. This allows the registry to discard previous registrations. It should work, as the datalabs FileSystems are copies of the datasets FileSystems. However, I don't know whether this is guaranteed to be compatible with other libraries that might use the same protocols.
I am linking the symmetric issue on [DataLab](https://github.com/ExpressAI/DataLab/issues/425) as ideally the issue is solved in both libraries the same way. Otherwise, it could lead to different behaviors depending on which library gets imported first.
### Steps to reproduce the bug
1. Run `pip install datalabs==0.4.15 datasets==2.12.0`
2. Run the following python code:
```python
import datalabs
import datasets
```
### Expected behavior
It should be possible to import both libraries without getting a `ValueError`.
### Environment info
datalabs==0.4.15
datasets==2.12.0
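
For reference, a minimal sketch of the `clobber=True` idea, with the warning suggested in the comments above (assuming each `fs_class` exposes a single protocol string, as in the traceback):

```python
# Sketch only: overwrite an existing fsspec registration instead of raising,
# but warn so users know a previously registered filesystem was replaced.
import warnings
import fsspec

def register_clobbering(fs_class):
    try:
        fsspec.register_implementation(fs_class.protocol, fs_class)
    except ValueError:
        warnings.warn(
            f"A filesystem for protocol {fs_class.protocol!r} was already registered; "
            f"overwriting it with {fs_class.__name__}."
        )
        fsspec.register_implementation(fs_class.protocol, fs_class, clobber=True)
```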
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5876/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5876/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6540
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6540/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6540/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6540/events
|
https://github.com/huggingface/datasets/issues/6540
| 2,058,965,157
|
I_kwDODunzps56uVCl
| 6,540
|
Extreme inefficiency for `save_to_disk` when merging datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43512683?v=4",
"events_url": "https://api.github.com/users/KatarinaYuan/events{/privacy}",
"followers_url": "https://api.github.com/users/KatarinaYuan/followers",
"following_url": "https://api.github.com/users/KatarinaYuan/following{/other_user}",
"gists_url": "https://api.github.com/users/KatarinaYuan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KatarinaYuan",
"id": 43512683,
"login": "KatarinaYuan",
"node_id": "MDQ6VXNlcjQzNTEyNjgz",
"organizations_url": "https://api.github.com/users/KatarinaYuan/orgs",
"received_events_url": "https://api.github.com/users/KatarinaYuan/received_events",
"repos_url": "https://api.github.com/users/KatarinaYuan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KatarinaYuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KatarinaYuan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KatarinaYuan",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Concatenating datasets doesn't create any indices mapping - so flattening indices is not needed (unless you shuffle the dataset).\r\nCan you share the snippet of code you are using to merge your datasets and save them to disk ?"
] | 2023-12-29T00:44:35Z
| 2023-12-30T15:05:48Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hi, I tried to merge a total of 22M sequences, each of maximum length 2000. I found that merging these datasets and then calling `save_to_disk` is extremely slow because of the index flattening step. Wondering if you have any suggestions or guidance on this. Thank you very much!
### Steps to reproduce the bug
The source data is too big to demonstrate
### Expected behavior
The source data is too big to demonstrate
### Environment info
python 3.9.0
datasets 2.7.0
pytorch 2.0.0
tokenizers 0.13.1
transformers 4.31.0
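
As the comment above notes, pure concatenation creates no indices mapping; the expensive flattening only happens if an operation such as `shuffle` or `select` added one. A minimal sketch of the distinction (dataset names are hypothetical):

```python
from datasets import concatenate_datasets

# ds_a, ds_b: existing Dataset objects (hypothetical names)
merged = concatenate_datasets([ds_a, ds_b])
merged.save_to_disk("merged")  # no indices mapping, so nothing to flatten

# After a shuffle, an indices mapping exists and save_to_disk must flatten it,
# rewriting every row, which is slow for ~22M long sequences.
shuffled = merged.shuffle(seed=0)
shuffled = shuffled.flatten_indices()  # the step that dominates the runtime
shuffled.save_to_disk("shuffled")
```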
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6540/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6540/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6416
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6416/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6416/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6416/events
|
https://github.com/huggingface/datasets/pull/6416
| 1,992,954,723
|
PR_kwDODunzps5fbA4H
| 6,416
|
Rename audio_classificiation.py to audio_classification.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1595907?v=4",
"events_url": "https://api.github.com/users/carlthome/events{/privacy}",
"followers_url": "https://api.github.com/users/carlthome/followers",
"following_url": "https://api.github.com/users/carlthome/following{/other_user}",
"gists_url": "https://api.github.com/users/carlthome/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/carlthome",
"id": 1595907,
"login": "carlthome",
"node_id": "MDQ6VXNlcjE1OTU5MDc=",
"organizations_url": "https://api.github.com/users/carlthome/orgs",
"received_events_url": "https://api.github.com/users/carlthome/received_events",
"repos_url": "https://api.github.com/users/carlthome/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/carlthome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carlthome/subscriptions",
"type": "User",
"url": "https://api.github.com/users/carlthome",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Oh good catch. Can you also rename it in `src/datasets/tasks/__init__.py` ?",
"Fixed! \r\n\r\n(I think, tough word to spell right TBH)",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004737 / 0.011353 (-0.006616) | 0.002446 / 0.011008 (-0.008563) | 0.060928 / 0.038508 (0.022420) | 0.030479 / 0.023109 (0.007370) | 0.238385 / 0.275898 (-0.037513) | 0.265563 / 0.323480 (-0.057917) | 0.002910 / 0.007986 (-0.005076) | 0.002325 / 0.004328 (-0.002004) | 0.047817 / 0.004250 (0.043566) | 0.044243 / 0.037052 (0.007191) | 0.245190 / 0.258489 (-0.013299) | 0.275449 / 0.293841 (-0.018392) | 0.023384 / 0.128546 (-0.105162) | 0.006820 / 0.075646 (-0.068826) | 0.201488 / 0.419271 (-0.217783) | 0.057758 / 0.043533 (0.014225) | 0.245279 / 0.255139 (-0.009860) | 0.266094 / 0.283200 (-0.017106) | 0.019254 / 0.141683 (-0.122429) | 1.107497 / 1.452155 (-0.344658) | 1.161412 / 1.492716 (-0.331304) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094909 / 0.018006 (0.076903) | 0.305185 / 0.000490 (0.304695) | 0.000221 / 0.000200 (0.000021) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018352 / 0.037411 (-0.019059) | 0.062441 / 0.014526 (0.047915) | 0.072386 / 0.176557 (-0.104171) | 0.118836 / 0.737135 (-0.618299) | 0.074514 / 0.296338 (-0.221824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283632 / 0.215209 (0.068423) | 2.751845 / 2.077655 (0.674190) | 1.478620 / 1.504120 (-0.025499) | 1.357221 / 1.541195 (-0.183974) | 1.415297 / 
1.468490 (-0.053194) | 0.400093 / 4.584777 (-4.184684) | 2.404607 / 3.745712 (-1.341105) | 2.617572 / 5.269862 (-2.652289) | 1.587622 / 4.565676 (-2.978055) | 0.045997 / 0.424275 (-0.378278) | 0.004872 / 0.007607 (-0.002735) | 0.338901 / 0.226044 (0.112856) | 3.371362 / 2.268929 (1.102434) | 1.870469 / 55.444624 (-53.574155) | 1.561670 / 6.876477 (-5.314807) | 1.573186 / 2.142072 (-0.568886) | 0.478735 / 4.805227 (-4.326492) | 0.098743 / 6.500664 (-6.401921) | 0.041780 / 0.075469 (-0.033689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945422 / 1.841788 (-0.896366) | 11.563464 / 8.074308 (3.489156) | 10.368731 / 10.191392 (0.177339) | 0.129910 / 0.680424 (-0.550513) | 0.014014 / 0.534201 (-0.520187) | 0.269036 / 0.579283 (-0.310247) | 0.265516 / 0.434364 (-0.168848) | 0.311082 / 0.540337 (-0.229255) | 0.431510 / 1.386936 (-0.955426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006284) | 0.002989 / 0.011008 (-0.008019) | 0.048213 / 0.038508 (0.009705) | 0.056133 / 0.023109 (0.033024) | 0.283347 / 0.275898 (0.007449) | 0.307505 / 0.323480 (-0.015975) | 0.004041 / 0.007986 (-0.003944) | 0.002477 / 0.004328 (-0.001852) | 0.047771 / 0.004250 (0.043521) | 0.039361 / 0.037052 (0.002309) | 0.283764 / 0.258489 (0.025275) | 0.320644 / 0.293841 (0.026803) | 0.024972 / 0.128546 (-0.103575) | 0.007599 / 0.075646 (-0.068048) | 0.054732 / 0.419271 (-0.364539) | 0.032774 / 0.043533 (-0.010759) | 0.285594 / 0.255139 (0.030455) | 0.301500 / 0.283200 (0.018300) | 0.018181 / 0.141683 (-0.123501) | 1.126311 / 1.452155 (-0.325843) | 1.187147 / 1.492716 (-0.305569) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097397 / 0.018006 (0.079391) | 0.315112 / 0.000490 (0.314622) | 0.000224 / 0.000200 (0.000024) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021529 / 0.037411 (-0.015882) | 0.073208 / 0.014526 (0.058682) | 0.081683 / 0.176557 (-0.094874) | 0.120475 / 0.737135 (-0.616660) | 0.083265 / 0.296338 (-0.213073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289976 / 0.215209 (0.074767) | 2.839860 / 2.077655 (0.762205) | 1.592635 / 1.504120 (0.088515) | 1.466722 / 1.541195 (-0.074472) | 1.552850 / 1.468490 (0.084360) | 0.418693 / 4.584777 (-4.166084) | 2.526620 / 3.745712 (-1.219093) | 2.706182 / 5.269862 (-2.563680) | 1.618514 / 4.565676 (-2.947162) | 0.046303 / 0.424275 (-0.377972) | 0.004873 / 0.007607 (-0.002734) | 0.345146 / 0.226044 (0.119102) | 3.378448 / 2.268929 (1.109520) | 1.986393 / 55.444624 (-53.458231) | 1.681838 / 6.876477 (-5.194639) | 1.738093 / 2.142072 (-0.403980) | 0.484386 / 4.805227 (-4.320842) | 0.100693 / 6.500664 (-6.399971) | 0.043084 / 0.075469 (-0.032385) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976399 / 1.841788 (-0.865389) | 13.122968 / 8.074308 (5.048660) | 11.245031 / 10.191392 (1.053639) | 0.134433 / 0.680424 (-0.545991) | 0.017439 / 0.534201 (-0.516762) | 0.274083 / 0.579283 (-0.305200) | 0.287353 / 0.434364 (-0.147011) | 0.309231 / 0.540337 (-0.231106) | 0.418003 / 1.386936 (-0.968933) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-14T15:15:29Z
| 2023-11-15T11:59:32Z
| 2023-11-15T11:53:20Z
|
CONTRIBUTOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6416/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6416/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6416",
"merged_at": "2023-11-15T11:53:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6416"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6018
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6018/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6018/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6018/events
|
https://github.com/huggingface/datasets/pull/6018
| 1,799,411,999
|
PR_kwDODunzps5VOmKY
| 6,018
|
test1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/139256323?v=4",
"events_url": "https://api.github.com/users/ognjenovicj/events{/privacy}",
"followers_url": "https://api.github.com/users/ognjenovicj/followers",
"following_url": "https://api.github.com/users/ognjenovicj/following{/other_user}",
"gists_url": "https://api.github.com/users/ognjenovicj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ognjenovicj",
"id": 139256323,
"login": "ognjenovicj",
"node_id": "U_kgDOCEziAw",
"organizations_url": "https://api.github.com/users/ognjenovicj/orgs",
"received_events_url": "https://api.github.com/users/ognjenovicj/received_events",
"repos_url": "https://api.github.com/users/ognjenovicj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ognjenovicj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ognjenovicj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ognjenovicj",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"We no longer host datasets in this repo. You should use the HF Hub instead."
] | 2023-07-11T17:25:49Z
| 2023-07-20T10:11:41Z
| 2023-07-20T10:11:41Z
|
NONE
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6018/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6018/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6018.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6018",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6018.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6018"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5769
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5769/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5769/events
|
https://github.com/huggingface/datasets/issues/5769
| 1,673,441,182
|
I_kwDODunzps5jvq-e
| 5,769
|
Tiktoken tokenizers are not picklable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4",
"events_url": "https://api.github.com/users/markovalexander/events{/privacy}",
"followers_url": "https://api.github.com/users/markovalexander/followers",
"following_url": "https://api.github.com/users/markovalexander/following{/other_user}",
"gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/markovalexander",
"id": 22663468,
"login": "markovalexander",
"node_id": "MDQ6VXNlcjIyNjYzNDY4",
"organizations_url": "https://api.github.com/users/markovalexander/orgs",
"received_events_url": "https://api.github.com/users/markovalexander/received_events",
"repos_url": "https://api.github.com/users/markovalexander/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions",
"type": "User",
"url": "https://api.github.com/users/markovalexander",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?"
] | 2023-04-18T16:07:40Z
| 2023-05-04T18:55:57Z
| 2023-05-04T18:55:57Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object`
### Steps to reproduce the bug
```python
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")

def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=2,
)
```
### Expected behavior
starts processing dataset
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 2.0.0
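
A common workaround, not necessarily the fix intended by the maintainers, is to build the unpicklable encoder lazily inside the mapped function, so it is constructed per worker and never has to cross the process boundary:

```python
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")

_enc = None  # constructed lazily in each worker process

def process(example):
    global _enc
    if _enc is None:
        _enc = tiktoken.get_encoding("gpt2")  # created per worker, never pickled
    ids = _enc.encode(example["text"])
    ids.append(_enc.eot_token)
    return {"ids": ids, "len": len(ids)}

tokenized = dataset.map(process, remove_columns=["text"], num_proc=2)
```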
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5769/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6194
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6194/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6194/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6194/events
|
https://github.com/huggingface/datasets/issues/6194
| 1,872,598,223
|
I_kwDODunzps5vnZTP
| 6,194
|
Support custom fingerprinting with `Dataset.from_generator`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"The `fingerprint` parameter serves a slightly different purpose - we use it to inject a new fingerprint after transforming a `Dataset` (computed from the previous fingerprint + transform + transform args), e.g., to be able to compute the cache file for a transform. There is no concept of `fingerprint` before a `Dataset` is fully initialized, but we still need to hash the args (e.g., generator func) of the \"dataset creation methods\" (`from_generator`, `from_csv`, etc.) to compute the cache directory (to store the initial version and transformed dataset versions).\r\n\r\nI agree it should be easier to bypass the hashing mechanism in this instance, too. However, we should probably first address https://github.com/huggingface/datasets/issues/5080 before solving this (e.g., maybe exposing `hash` in `load_dataset`/`load_dataset_builder`).",
"Adding +1 here:\r\n\r\nIf the generator needs to access some external resources or state, then it's not always straightforward to make it pickle-able. So I'd like to be able to override how the default cache key derivation needs to pickle the generator (and of course, I'd accept responsibility for that part of cache consistency).\r\n\r\nAppears to be a recurrent roadbump: #6118 #5963 #5819 #5750 #4983 ",
"Silly hack incoming:\r\n\r\n```python\r\nimport uuid\r\n\r\nclass _DatasetGeneratorPickleHack:\r\n    def __init__(self, generator, generator_id=None):\r\n        self.generator = generator\r\n        self.generator_id = (\r\n            generator_id if generator_id is not None else str(uuid.uuid4())\r\n        )\r\n\r\n    def __call__(self, *args, **kwargs):\r\n        return self.generator(*args, **kwargs)\r\n\r\n    def __reduce__(self):\r\n        return (_DatasetGeneratorPickleHack_raise, (self.generator_id,))\r\n\r\n\r\ndef _DatasetGeneratorPickleHack_raise(*args, **kwargs):\r\n    raise AssertionError(\"cannot actually unpickle _DatasetGeneratorPickleHack!\")\r\n```\r\n\r\nNow `Dataset.from_generator(_DatasetGeneratorPickleHack(gen))` works even if `gen` is unpicklable, because Dataset just pickles the shim object that avoids actually traversing `gen`. Then, one can work out how to set `generator_id` meaningfully to allow cache reuse.",
"I'd like some way to do this too. I find that sometimes the hash doesn't cover enough, and that the dataset is not regenerated even when underlying data has changed, and by supplying a custom fingerprint I could do a better job of controlling when my dataset is regenerated.",
"This is what I did and it works: \r\n\r\nhttps://github.com/stevemadere/s3-datasets/blob/e475a566a16d3051656a66f8ff4d3baa4c55a66c/src/tokengenerators/text_ds_2_tokens_generator.py#L200\r\n",
"I ran into the same thing - my actual generator reads from a disk source that might have new data (images) available at some point and it ends up ignoring calling the generator. Thanks for the hack @mlin 👋 ",
"just wanted to pitch my support for an easy control over the generator id. requiring that generators are pickleable just to get a unique id is limiting: plenty of classes (maybe even hf.datasets own) are written with no pickle support in mind. also as mentioned above the state of a generator might extend beyond its pickle."
] | 2023-08-29T22:43:13Z
| 2024-12-22T01:14:39Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`.
### Motivation
Using the `.from_generator` constructor with a non-picklable generator fails. By accepting a `fingerprint` argument to `.from_generator`, the user would have the opportunity to manually fingerprint the dataset and thus bypass the crash.
### Your contribution
If validated, I can try to submit a PR for this.
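For illustration, a minimal sketch of how the requested argument might be used; note that `fingerprint` here is the proposed parameter being requested in this issue, not an existing one:
```python
from datasets import Dataset

def gen():
    # May close over unpicklable state (database handles, tokenizers, ...).
    yield {"text": "hello"}

# Proposed usage: the user-supplied fingerprint would replace hashing the generator.
ds = Dataset.from_generator(gen, fingerprint="my-dataset-v1")
```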
| null |
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6194/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6194/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5646
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5646/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5646/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5646/events
|
https://github.com/huggingface/datasets/pull/5646
| 1,627,838,762
|
PR_kwDODunzps5MOqjj
| 5,646
|
Allow self as key in `Features`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009980 / 0.011353 (-0.001373) | 0.006643 / 0.011008 (-0.004366) | 0.140722 / 0.038508 (0.102214) | 0.036693 / 0.023109 (0.013584) | 0.430019 / 0.275898 (0.154121) | 0.463218 / 0.323480 (0.139738) | 0.006977 / 0.007986 (-0.001008) | 0.006488 / 0.004328 (0.002160) | 0.099385 / 0.004250 (0.095134) | 0.047160 / 0.037052 (0.010108) | 0.431440 / 0.258489 (0.172951) | 0.500232 / 0.293841 (0.206391) | 0.057968 / 0.128546 (-0.070578) | 0.020197 / 0.075646 (-0.055449) | 0.438269 / 0.419271 (0.018998) | 0.071149 / 0.043533 (0.027617) | 0.428502 / 0.255139 (0.173363) | 0.486861 / 0.283200 (0.203661) | 0.119855 / 0.141683 (-0.021828) | 1.875372 / 1.452155 (0.423218) | 1.955055 / 1.492716 (0.462339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243468 / 0.018006 (0.225462) | 0.547842 / 0.000490 (0.547352) | 0.004885 / 0.000200 (0.004685) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031555 / 0.037411 (-0.005856) | 0.125869 / 0.014526 (0.111343) | 0.137816 / 0.176557 (-0.038741) | 0.206581 / 0.737135 (-0.530555) | 0.142976 / 0.296338 (-0.153362) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.624773 / 0.215209 (0.409564) | 6.154861 / 2.077655 (4.077206) | 2.504586 / 1.504120 (1.000466) | 1.989118 / 1.541195 (0.447923) | 2.092280 / 1.468490 
(0.623790) | 1.240108 / 4.584777 (-3.344669) | 5.584893 / 3.745712 (1.839181) | 3.075369 / 5.269862 (-2.194492) | 2.174285 / 4.565676 (-2.391391) | 0.141555 / 0.424275 (-0.282720) | 0.016099 / 0.007607 (0.008492) | 0.720543 / 0.226044 (0.494498) | 7.489000 / 2.268929 (5.220071) | 3.239189 / 55.444624 (-52.205435) | 2.525772 / 6.876477 (-4.350704) | 2.773514 / 2.142072 (0.631441) | 1.410084 / 4.805227 (-3.395143) | 0.259252 / 6.500664 (-6.241412) | 0.082573 / 0.075469 (0.007104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.458186 / 1.841788 (-0.383602) | 17.503738 / 8.074308 (9.429430) | 20.817682 / 10.191392 (10.626290) | 0.231221 / 0.680424 (-0.449203) | 0.032550 / 0.534201 (-0.501651) | 0.559020 / 0.579283 (-0.020263) | 0.592987 / 0.434364 (0.158623) | 0.602661 / 0.540337 (0.062324) | 0.731912 / 1.386936 (-0.655024) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009543 / 0.011353 (-0.001810) | 0.006953 / 0.011008 (-0.004055) | 0.087651 / 0.038508 (0.049143) | 0.031717 / 0.023109 (0.008608) | 0.437813 / 0.275898 (0.161915) | 0.468448 / 0.323480 (0.144968) | 0.007378 / 0.007986 (-0.000607) | 0.005170 / 0.004328 (0.000842) | 0.102286 / 0.004250 (0.098035) | 0.043643 / 0.037052 (0.006591) | 0.458788 / 0.258489 (0.200299) | 0.519891 / 0.293841 (0.226050) | 0.052875 / 0.128546 (-0.075671) | 0.020518 / 0.075646 (-0.055128) | 0.112675 / 0.419271 (-0.306597) | 0.066390 / 0.043533 (0.022858) | 0.423037 / 0.255139 (0.167898) | 0.420345 / 0.283200 (0.137146) | 0.119221 / 0.141683 (-0.022462) | 1.632244 / 1.452155 (0.180090) | 1.829585 / 1.492716 (0.336869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242312 / 0.018006 (0.224305) | 0.547592 / 0.000490 (0.547102) | 0.006520 / 0.000200 (0.006320) | 0.000185 / 0.000054 (0.000131) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032204 / 0.037411 (-0.005207) | 0.113320 / 0.014526 (0.098794) | 0.135667 / 0.176557 (-0.040889) | 0.194360 / 0.737135 (-0.542775) | 0.127934 / 0.296338 (-0.168404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648134 / 0.215209 (0.432925) | 6.470574 / 2.077655 (4.392920) | 2.799121 / 1.504120 (1.295001) | 2.160450 / 1.541195 (0.619255) | 2.261648 / 1.468490 (0.793158) | 1.244660 / 4.584777 (-3.340117) | 5.694636 / 3.745712 (1.948923) | 5.316191 / 5.269862 (0.046329) | 2.764551 / 4.565676 (-1.801126) | 0.152225 / 0.424275 (-0.272051) | 0.015959 / 0.007607 (0.008351) | 0.833606 / 0.226044 (0.607562) | 8.099765 / 2.268929 (5.830836) | 3.523005 / 55.444624 (-51.921620) | 2.855126 / 6.876477 (-4.021351) | 2.730849 / 2.142072 (0.588776) | 1.434351 / 4.805227 (-3.370876) | 0.251963 / 6.500664 (-6.248701) | 0.085718 / 0.075469 (0.010249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.722466 / 1.841788 (-0.119322) | 17.846981 / 8.074308 (9.772673) | 21.578684 / 10.191392 (11.387292) | 0.239987 / 0.680424 (-0.440437) | 0.029189 / 0.534201 (-0.505012) | 0.543181 / 0.579283 (-0.036102) | 0.626527 / 0.434364 (0.192163) | 0.614334 / 0.540337 (0.073997) | 0.745934 / 1.386936 (-0.641002) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007395 / 0.011353 (-0.003958) | 0.004965 / 0.011008 (-0.006043) | 0.096376 / 0.038508 (0.057868) | 0.033243 / 0.023109 (0.010134) | 0.299990 / 0.275898 (0.024092) | 0.336287 / 0.323480 (0.012807) | 0.005528 / 0.007986 (-0.002458) | 0.004003 / 0.004328 (-0.000326) | 0.072820 / 0.004250 (0.068569) | 0.042867 / 0.037052 (0.005815) | 0.296719 / 0.258489 (0.038230) | 0.337313 / 0.293841 (0.043472) | 0.036809 / 0.128546 (-0.091738) | 0.012239 / 0.075646 (-0.063407) | 0.332351 / 0.419271 (-0.086921) | 0.050449 / 0.043533 (0.006916) | 0.301483 / 0.255139 (0.046344) | 0.316673 / 0.283200 (0.033474) | 0.102526 / 0.141683 (-0.039157) | 1.415429 / 1.452155 (-0.036726) | 1.544381 / 1.492716 (0.051665) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211158 / 0.018006 (0.193152) | 0.434718 / 0.000490 (0.434228) | 0.003386 / 0.000200 (0.003186) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027945 / 0.037411 (-0.009466) | 0.108743 / 0.014526 (0.094217) | 0.119771 / 0.176557 (-0.056785) | 0.178667 / 0.737135 (-0.558468) | 0.123718 / 0.296338 (-0.172620) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413908 / 0.215209 (0.198699) | 4.136828 / 2.077655 (2.059174) | 1.932547 / 1.504120 (0.428427) | 1.715389 / 1.541195 (0.174194) | 1.791679 / 1.468490 
(0.323189) | 0.692715 / 4.584777 (-3.892062) | 3.741807 / 3.745712 (-0.003905) | 2.066274 / 5.269862 (-3.203587) | 1.314106 / 4.565676 (-3.251570) | 0.087191 / 0.424275 (-0.337084) | 0.012866 / 0.007607 (0.005259) | 0.510012 / 0.226044 (0.283968) | 5.116419 / 2.268929 (2.847490) | 2.408562 / 55.444624 (-53.036063) | 2.002044 / 6.876477 (-4.874433) | 2.121868 / 2.142072 (-0.020204) | 0.837141 / 4.805227 (-3.968086) | 0.166596 / 6.500664 (-6.334068) | 0.063190 / 0.075469 (-0.012279) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204152 / 1.841788 (-0.637636) | 14.739793 / 8.074308 (6.665485) | 14.403469 / 10.191392 (4.212077) | 0.165781 / 0.680424 (-0.514642) | 0.017826 / 0.534201 (-0.516375) | 0.423527 / 0.579283 (-0.155756) | 0.431410 / 0.434364 (-0.002954) | 0.499422 / 0.540337 (-0.040915) | 0.596116 / 1.386936 (-0.790820) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007365 / 0.011353 (-0.003988) | 0.005165 / 0.011008 (-0.005844) | 0.073403 / 0.038508 (0.034895) | 0.032542 / 0.023109 (0.009433) | 0.339304 / 0.275898 (0.063406) | 0.371892 / 0.323480 (0.048412) | 0.005544 / 0.007986 (-0.002442) | 0.004108 / 0.004328 (-0.000221) | 0.073750 / 0.004250 (0.069500) | 0.045613 / 0.037052 (0.008561) | 0.366159 / 0.258489 (0.107670) | 0.389864 / 0.293841 (0.096023) | 0.036006 / 0.128546 (-0.092540) | 0.012402 / 0.075646 (-0.063244) | 0.085137 / 0.419271 (-0.334135) | 0.048485 / 0.043533 (0.004952) | 0.334172 / 0.255139 (0.079033) | 0.353168 / 0.283200 (0.069969) | 0.099393 / 0.141683 (-0.042290) | 1.460584 / 1.452155 (0.008429) | 1.518601 / 1.492716 (0.025885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227352 / 0.018006 (0.209346) | 0.444211 / 0.000490 (0.443721) | 0.000410 / 0.000200 (0.000210) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029517 / 0.037411 (-0.007894) | 0.115557 / 0.014526 (0.101031) | 0.125855 / 0.176557 (-0.050701) | 0.175214 / 0.737135 (-0.561922) | 0.129324 / 0.296338 (-0.167014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429783 / 0.215209 (0.214574) | 4.301159 / 2.077655 (2.223504) | 2.084939 / 1.504120 (0.580819) | 1.887781 / 1.541195 (0.346586) | 2.045712 / 1.468490 (0.577222) | 0.693319 / 4.584777 (-3.891458) | 3.788595 / 3.745712 (0.042883) | 2.087080 / 5.269862 (-3.182781) | 1.325247 / 4.565676 (-3.240429) | 0.085919 / 0.424275 (-0.338356) | 0.012710 / 0.007607 (0.005103) | 0.533432 / 0.226044 (0.307387) | 5.339468 / 2.268929 (3.070540) | 2.578351 / 55.444624 (-52.866273) | 2.224905 / 6.876477 (-4.651572) | 2.301064 / 2.142072 (0.158992) | 0.839622 / 4.805227 (-3.965605) | 0.166523 / 6.500664 (-6.334141) | 0.065254 / 0.075469 (-0.010215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262223 / 1.841788 (-0.579565) | 15.042523 / 8.074308 (6.968215) | 14.542719 / 10.191392 (4.351327) | 0.142230 / 0.680424 (-0.538194) | 0.017610 / 0.534201 (-0.516591) | 0.422357 / 0.579283 (-0.156926) | 0.417785 / 0.434364 (-0.016579) | 0.491990 / 0.540337 (-0.048348) | 0.585835 / 1.386936 (-0.801101) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-16T16:17:03Z
| 2023-03-16T17:21:58Z
| 2023-03-16T17:14:50Z
|
COLLABORATOR
| null | null | null |
Fix #5641
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5646/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5646/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5646",
"merged_at": "2023-03-16T17:14:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5646"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5127
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5127/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5127/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5127/events
|
https://github.com/huggingface/datasets/pull/5127
| 1,411,897,544
|
PR_kwDODunzps5A8m-Q
| 5,127
|
[WIP] WebDataset export
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5127). All of your documentation changes will be reflected on that endpoint.",
"Should we close this PR?"
] | 2022-10-17T16:50:22Z
| 2024-01-11T06:27:04Z
| 2024-01-08T14:25:43Z
|
MEMBER
| null | null | null |
I added a first draft of the `IterableDataset.to_wds` method.
You can use it to save a dataset loaded in streaming mode as a WebDataset locally.
The API can be further improved to allow exporting to a cloud storage like the HF Hub.
I also included sharding with a default max shard size of 500MB (uncompressed), and it is single-processed for now.
Choosing the number of shards is not implemented yet - though if we know the size of the `IterableDataset`, this is probably doable.
For example
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> ds.to_wds("output_dir", compress=True)
>>> import webdataset as wds
>>> ds = wds.WebDataset("output_dir/rotten_tomatoes-train-000000.tar.gz").decode()
>>> next(iter(ds))
{'__key__': '0',
'__url__': 'output_dir/rotten_tomatoes-train-000000.tar.gz',
'label.cls': 1,
'text.txt': 'the rock is destined to be the 21st century\'s new ..., jean-claud van damme or steven segal .'}
```
### Implementation details
The WebDataset format is made of TAR archives containing a series of files per example, for example one pair of `image.jpg` and `label.cls` for image classification.
WebDataset automatically decodes serialized data based on the extension of the files and outputs a dictionary, for example `{"image.png": np.array(...), "label.cls": 0}` if you choose the numpy decoding.
To use the automatic decoding, I store each field of each example as a file with its corresponding extension (jpg, json, cls, etc.).
While this is useful to end up with a dictionary with one key per column and appropriate decoding, it can create huge TAR archives if the dataset is made of small text samples - probably because of the TAR metadata overhead for each file. This also makes loading super slow: iterating on SQuAD takes 50sec vs 7sec using `datasets` in streaming mode.
I haven't taken a look at alternatives for text datasets made out of small samples, but for image datasets this can already be used to run some benchmarks.
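For reference, a minimal sketch of the "one file per field" layout described above (illustrative only, not the code of this PR): each field of a sample is written as a `{key}.{field}` file so WebDataset can decode it by extension.
```python
import io
import tarfile

sample_key = "0"
fields = {"text.txt": b"the rock is destined to be ...", "label.cls": b"1"}

with tarfile.open("rotten_tomatoes-train-000000.tar", "w") as tar:
    for name, data in fields.items():
        # Produces "0.text.txt" and "0.label.cls" inside the archive.
        info = tarfile.TarInfo(name=f"{sample_key}.{name}")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
```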
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5127/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5127/timeline
| null | null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5127",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5127"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7072
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7072/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7072/events
|
https://github.com/huggingface/datasets/issues/7072
| 2,430,577,916
|
I_kwDODunzps6Q36z8
| 7,072
|
nm
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4",
"events_url": "https://api.github.com/users/brettdavies/events{/privacy}",
"followers_url": "https://api.github.com/users/brettdavies/followers",
"following_url": "https://api.github.com/users/brettdavies/following{/other_user}",
"gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brettdavies",
"id": 26392883,
"login": "brettdavies",
"node_id": "MDQ6VXNlcjI2MzkyODgz",
"organizations_url": "https://api.github.com/users/brettdavies/orgs",
"received_events_url": "https://api.github.com/users/brettdavies/received_events",
"repos_url": "https://api.github.com/users/brettdavies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brettdavies",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2024-07-25T17:03:24Z
| 2024-07-25T20:36:11Z
| 2024-07-25T20:36:11Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4",
"events_url": "https://api.github.com/users/brettdavies/events{/privacy}",
"followers_url": "https://api.github.com/users/brettdavies/followers",
"following_url": "https://api.github.com/users/brettdavies/following{/other_user}",
"gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brettdavies",
"id": 26392883,
"login": "brettdavies",
"node_id": "MDQ6VXNlcjI2MzkyODgz",
"organizations_url": "https://api.github.com/users/brettdavies/orgs",
"received_events_url": "https://api.github.com/users/brettdavies/received_events",
"repos_url": "https://api.github.com/users/brettdavies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brettdavies",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7072/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7072/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6380
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6380/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6380/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6380/events
|
https://github.com/huggingface/datasets/pull/6380
| 1,974,741,221
|
PR_kwDODunzps5edaO6
| 6,380
|
Fix for continuation behaviour on broken dataset archives due to starving download connections via HTTP-GET
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49956579?v=4",
"events_url": "https://api.github.com/users/RuntimeRacer/events{/privacy}",
"followers_url": "https://api.github.com/users/RuntimeRacer/followers",
"following_url": "https://api.github.com/users/RuntimeRacer/following{/other_user}",
"gists_url": "https://api.github.com/users/RuntimeRacer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RuntimeRacer",
"id": 49956579,
"login": "RuntimeRacer",
"node_id": "MDQ6VXNlcjQ5OTU2NTc5",
"organizations_url": "https://api.github.com/users/RuntimeRacer/orgs",
"received_events_url": "https://api.github.com/users/RuntimeRacer/received_events",
"repos_url": "https://api.github.com/users/RuntimeRacer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RuntimeRacer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RuntimeRacer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RuntimeRacer",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-11-02T17:28:23Z
| 2023-11-02T17:31:19Z
| null |
NONE
| null | null | null |
This PR proposes a (slightly hacky) fix for an issue that can occur when downloading large dataset parts over unstable connections.
The underlying issue is also being discussed in https://github.com/huggingface/datasets/issues/5594.
Issue symptoms & behaviour:
- Download of a large archive file during dataset download via HTTP-GET fails.
- A silent network exception (which I was unable to identify) is thrown within the `tqdm` download progress.
- Because the exception is not caught, the process just continues, assuming `http_get` completed successfully.
- The pending archive file gets renamed to remove the `.incomplete` extension, even though not all of the data has been downloaded.
- Also, for reasons I did not investigate, there seems to be no real integrity check for the downloaded files, or it does not detect this problem. This is especially problematic, since the downloader script won't retry downloading this archive after CRC checking, even if it is manually restarted / executed again after running into errors on extraction.
Fix proposal: add a retry mechanism for HTTP-GET downloads with the following behaviour (a minimal sketch follows the list below):
- The download progress thread checks that the downloaded size is valid in case the HTTP connection starves mid-download. If the check fails, a `RuntimeError` is thrown.
- The cache downloader's retry mechanism monitors for an exception thrown by the download progress thread and retries the download with an updated `resume_size`.
- The cache downloader will not mark as complete any incomplete files whose download threw an exception and exceeded the retry limit.
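A minimal sketch of the retry-with-resume idea, assuming a plain `requests`-based helper and a server that honours `Range` requests; the names here are illustrative and not the actual `datasets` internals:
```python
import os
import requests

def http_get_with_retries(url, path, max_retries=5, chunk_size=1 << 20):
    """Download `url` to `path`, resuming from the last received byte after failures."""
    for _ in range(max_retries):
        resume_size = os.path.getsize(path) if os.path.exists(path) else 0
        # Assumes the server supports HTTP Range requests for resumption.
        headers = {"Range": f"bytes={resume_size}-"} if resume_size else {}
        try:
            with requests.get(url, headers=headers, stream=True, timeout=30) as r:
                r.raise_for_status()
                total = resume_size + int(r.headers.get("Content-Length", 0))
                with open(path, "ab") as f:
                    for chunk in r.iter_content(chunk_size=chunk_size):
                        f.write(chunk)
            if os.path.getsize(path) >= total:
                return  # size check passed: the connection did not starve
        except requests.RequestException:
            pass  # starved/dropped connection: retry with an updated resume_size
    raise RuntimeError(f"download of {url} is still incomplete after {max_retries} retries")
```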
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6380/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6380/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6380.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6380",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6380.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6380"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6633
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6633/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6633/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6633/events
|
https://github.com/huggingface/datasets/pull/6633
| 2,110,124,475
|
PR_kwDODunzps5lknz9
| 6,633
|
dataset viewer requires no-script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6633). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005172 / 0.011353 (-0.006181) | 0.003694 / 0.011008 (-0.007314) | 0.063098 / 0.038508 (0.024590) | 0.028161 / 0.023109 (0.005052) | 0.262288 / 0.275898 (-0.013610) | 0.281867 / 0.323480 (-0.041613) | 0.004088 / 0.007986 (-0.003898) | 0.002745 / 0.004328 (-0.001583) | 0.049071 / 0.004250 (0.044820) | 0.040629 / 0.037052 (0.003577) | 0.282766 / 0.258489 (0.024277) | 0.297998 / 0.293841 (0.004157) | 0.028057 / 0.128546 (-0.100489) | 0.010878 / 0.075646 (-0.064768) | 0.207410 / 0.419271 (-0.211861) | 0.035600 / 0.043533 (-0.007933) | 0.260157 / 0.255139 (0.005018) | 0.273252 / 0.283200 (-0.009948) | 0.017403 / 0.141683 (-0.124280) | 1.150798 / 1.452155 (-0.301356) | 1.200485 / 1.492716 (-0.292231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093783 / 0.018006 (0.075777) | 0.302112 / 0.000490 (0.301622) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018254 / 0.037411 (-0.019158) | 0.061083 / 0.014526 (0.046557) | 0.074899 / 0.176557 (-0.101657) | 0.119616 / 0.737135 (-0.617520) | 0.075269 / 0.296338 (-0.221069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275878 / 0.215209 (0.060669) | 2.694778 / 2.077655 (0.617123) | 1.423810 / 1.504120 (-0.080310) | 1.309444 / 1.541195 (-0.231750) | 1.327898 / 
1.468490 (-0.140592) | 0.568621 / 4.584777 (-4.016155) | 2.345849 / 3.745712 (-1.399863) | 2.901281 / 5.269862 (-2.368580) | 1.777959 / 4.565676 (-2.787717) | 0.063539 / 0.424275 (-0.360736) | 0.005011 / 0.007607 (-0.002596) | 0.331212 / 0.226044 (0.105168) | 3.200379 / 2.268929 (0.931451) | 1.780766 / 55.444624 (-53.663859) | 1.517178 / 6.876477 (-5.359299) | 1.587307 / 2.142072 (-0.554765) | 0.651939 / 4.805227 (-4.153288) | 0.116646 / 6.500664 (-6.384018) | 0.043325 / 0.075469 (-0.032144) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996894 / 1.841788 (-0.844894) | 11.495397 / 8.074308 (3.421089) | 10.255784 / 10.191392 (0.064392) | 0.129006 / 0.680424 (-0.551418) | 0.013967 / 0.534201 (-0.520234) | 0.284847 / 0.579283 (-0.294436) | 0.265610 / 0.434364 (-0.168754) | 0.320176 / 0.540337 (-0.220162) | 0.429526 / 1.386936 (-0.957410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005582 / 0.011353 (-0.005771) | 0.003867 / 0.011008 (-0.007142) | 0.050416 / 0.038508 (0.011908) | 0.030996 / 0.023109 (0.007887) | 0.275987 / 0.275898 (0.000089) | 0.289487 / 0.323480 (-0.033993) | 0.005149 / 0.007986 (-0.002837) | 0.002806 / 0.004328 (-0.001522) | 0.049617 / 0.004250 (0.045366) | 0.046949 / 0.037052 (0.009897) | 0.281596 / 0.258489 (0.023107) | 0.330948 / 0.293841 (0.037108) | 0.049645 / 0.128546 (-0.078901) | 0.010953 / 0.075646 (-0.064693) | 0.058546 / 0.419271 (-0.360725) | 0.034010 / 0.043533 (-0.009523) | 0.270525 / 0.255139 (0.015386) | 0.289749 / 0.283200 (0.006550) | 0.018755 / 0.141683 (-0.122927) | 1.163072 / 1.452155 (-0.289082) | 1.213400 / 1.492716 (-0.279316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092397 / 0.018006 (0.074390) | 0.299376 / 0.000490 (0.298886) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022496 / 0.037411 (-0.014916) | 0.076886 / 0.014526 (0.062361) | 0.087186 / 0.176557 (-0.089371) | 0.126092 / 0.737135 (-0.611044) | 0.088832 / 0.296338 (-0.207507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288885 / 0.215209 (0.073676) | 2.839851 / 2.077655 (0.762196) | 1.587556 / 1.504120 (0.083436) | 1.470249 / 1.541195 (-0.070945) | 1.518080 / 1.468490 (0.049590) | 0.569646 / 4.584777 (-4.015131) | 2.417574 / 3.745712 (-1.328138) | 2.737368 / 5.269862 (-2.532494) | 1.784419 / 4.565676 (-2.781257) | 0.064104 / 0.424275 (-0.360171) | 0.005138 / 0.007607 (-0.002469) | 0.346214 / 0.226044 (0.120169) | 3.439541 / 2.268929 (1.170612) | 1.944792 / 55.444624 (-53.499832) | 1.675762 / 6.876477 (-5.200714) | 1.851871 / 2.142072 (-0.290201) | 0.652932 / 4.805227 (-4.152295) | 0.118953 / 6.500664 (-6.381711) | 0.041011 / 0.075469 (-0.034459) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017690 / 1.841788 (-0.824098) | 12.610531 / 8.074308 (4.536223) | 11.223165 / 10.191392 (1.031773) | 0.131637 / 0.680424 (-0.548786) | 0.016733 / 0.534201 (-0.517468) | 0.288491 / 0.579283 (-0.290792) | 0.275899 / 0.434364 (-0.158465) | 0.331837 / 0.540337 (-0.208500) | 0.421695 / 1.386936 (-0.965241) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-31T13:41:54Z
| 2024-01-31T14:05:04Z
| 2024-01-31T13:59:01Z
|
COLLABORATOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6633/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6633/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6633.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6633",
"merged_at": "2024-01-31T13:59:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6633.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6633"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6571
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6571/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6571/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6571/events
|
https://github.com/huggingface/datasets/issues/6571
| 2,072,111,000
|
I_kwDODunzps57geeY
| 6,571
|
Make DatasetDict.column_names return a list instead of dict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-01-09T10:45:17Z
| 2024-01-09T10:45:17Z
| null |
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Currently, `DatasetDict.column_names` returns a dict, with the split names as keys and the corresponding lists of column names as values.
However, by construction, all splits have the same column names.
I think it makes more sense to return a single list with the column names, which is the same for all splits.
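To illustrate the current behaviour versus the proposal:
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc")  # a DatasetDict with train/validation/test splits
print(ds.column_names)
# current:  {'train': ['sentence1', 'sentence2', 'label', 'idx'],
#            'validation': ['sentence1', 'sentence2', 'label', 'idx'],
#            'test': ['sentence1', 'sentence2', 'label', 'idx']}
# proposed: ['sentence1', 'sentence2', 'label', 'idx']
```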
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6571/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6571/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6690
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6690/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6690/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6690/events
|
https://github.com/huggingface/datasets/issues/6690
| 2,150,800,065
|
I_kwDODunzps6AMprB
| 6,690
|
Add function to convert a script-dataset to Parquet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2024-02-23T10:28:20Z
| 2024-04-12T15:27:05Z
| 2024-04-12T15:27:05Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet"
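Until such a function exists, the conversion can be sketched manually; the repo ids below are hypothetical, and `push_to_hub` is what writes the data as Parquet:
```python
from datasets import load_dataset

# "user/script-dataset" and "user/script-dataset-parquet" are hypothetical repo ids
ds = load_dataset("user/script-dataset", trust_remote_code=True)

# push_to_hub uploads the splits as Parquet files, so the resulting
# repo can be loaded without running the original loading script
ds.push_to_hub("user/script-dataset-parquet")
```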
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6690/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6690/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6146
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6146/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6146/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6146/events
|
https://github.com/huggingface/datasets/issues/6146
| 1,848,417,366
|
I_kwDODunzps5uLJxW
| 6,146
|
DatasetGenerationError when load glue benchmark datasets from `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78742415?v=4",
"events_url": "https://api.github.com/users/yusx-swapp/events{/privacy}",
"followers_url": "https://api.github.com/users/yusx-swapp/followers",
"following_url": "https://api.github.com/users/yusx-swapp/following{/other_user}",
"gists_url": "https://api.github.com/users/yusx-swapp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yusx-swapp",
"id": 78742415,
"login": "yusx-swapp",
"node_id": "MDQ6VXNlcjc4NzQyNDE1",
"organizations_url": "https://api.github.com/users/yusx-swapp/orgs",
"received_events_url": "https://api.github.com/users/yusx-swapp/received_events",
"repos_url": "https://api.github.com/users/yusx-swapp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yusx-swapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yusx-swapp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yusx-swapp",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I've tried clear the .cache file, doesn't work.",
"This issue happens on AWS sagemaker",
"This issue can happen if there is a directory named \"glue\" relative to the Python script with the `load_dataset` call (similar issue to this one: https://github.com/huggingface/datasets/issues/5228). Is this the case?",
"> This issue can happen if there is a directory named \"glue\" relative to the Python script with the `load_dataset` call (similar issue to this one: #5228). Is this the case?\r\n\r\nThats correct!\r\nSorry for my late response."
] | 2023-08-13T05:17:56Z
| 2023-08-26T22:09:09Z
| 2023-08-26T22:09:09Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Package version: datasets-2.14.4
When I run the code:
```
from datasets import load_dataset
dataset = load_dataset("glue", "ax")
```
I got the following error:
```
---------------------------------------------------------------------------
SchemaInferenceError Traceback (most recent call last)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1949, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1948 num_shards = shard_id + 1
-> 1949 num_examples, num_bytes = writer.finalize()
1950 writer.close()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/arrow_writer.py:598, in ArrowWriter.finalize(self, close_stream)
597 self.stream.close()
--> 598 raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
599 logger.debug(
600 f"Done writing {self._num_examples} {self.unit} in {self._num_bytes} bytes {self._path if self._path else ''}."
601 )
SchemaInferenceError: Please pass `features` or at least one example when writing data

The above exception was the direct cause of the following exception:

DatasetGenerationError Traceback (most recent call last)
Cell In[5], line 3
1 from datasets import load_dataset
----> 3 dataset = load_dataset("glue", "ax")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/load.py:2136, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2133 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2135 # Download and prepare data
-> 2136 builder_instance.download_and_prepare(
2137 download_config=download_config,
2138 download_mode=download_mode,
2139 verification_mode=verification_mode,
2140 try_from_hf_gcs=try_from_hf_gcs,
2141 num_proc=num_proc,
2142 storage_options=storage_options,
2143 )
2145 # Build dataset for splits
2146 keep_in_memory = (
2147 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2148 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1049, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1045 split_dict.add(split_generator.split_info)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
1052 "Cannot find data file. "
1053 + (self.manual_download_instructions or "")
1054 + "\nOriginal error:\n"
1055 + str(e)
1056 ) from None
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1813, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
1816 if done:
1817 result = content
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1958, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("glue", "ax")
```
### Expected behavior
The dataset should load without errors. Instead, the train split generation stalls (`Generating train split: 0/0 [00:00<?, ? examples/s]`) and raises:
`DatasetGenerationError: An error occurred while generating the dataset`
### Environment info
datasets 2.14.4
Python 3.10
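As the comments note, this error can be caused by a local directory named "glue" shadowing the Hub dataset. A minimal guard, sketched for illustration:
```python
import os
from datasets import load_dataset

# A local "glue" directory relative to the script makes load_dataset try to
# build the dataset from it, which fails with an empty schema
if os.path.isdir("glue"):
    raise RuntimeError("Rename or move the local 'glue' directory first")

dataset = load_dataset("glue", "ax")
```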
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78742415?v=4",
"events_url": "https://api.github.com/users/yusx-swapp/events{/privacy}",
"followers_url": "https://api.github.com/users/yusx-swapp/followers",
"following_url": "https://api.github.com/users/yusx-swapp/following{/other_user}",
"gists_url": "https://api.github.com/users/yusx-swapp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yusx-swapp",
"id": 78742415,
"login": "yusx-swapp",
"node_id": "MDQ6VXNlcjc4NzQyNDE1",
"organizations_url": "https://api.github.com/users/yusx-swapp/orgs",
"received_events_url": "https://api.github.com/users/yusx-swapp/received_events",
"repos_url": "https://api.github.com/users/yusx-swapp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yusx-swapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yusx-swapp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yusx-swapp",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6146/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6146/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6991
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6991/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6991/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6991/events
|
https://github.com/huggingface/datasets/pull/6991
| 2,367,711,094
|
PR_kwDODunzps5zPoQs
| 6,991
|
Unblock NumPy 2.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4",
"events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}",
"followers_url": "https://api.github.com/users/NeilGirdhar/followers",
"following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}",
"gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NeilGirdhar",
"id": 730137,
"login": "NeilGirdhar",
"node_id": "MDQ6VXNlcjczMDEzNw==",
"organizations_url": "https://api.github.com/users/NeilGirdhar/orgs",
"received_events_url": "https://api.github.com/users/NeilGirdhar/received_events",
"repos_url": "https://api.github.com/users/NeilGirdhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NeilGirdhar",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6991). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@albertvillanova Any chance we could get this in before the next release? Everything depending on HuggingFace has their NumPy upgrade blocked.",
"The incompatible libraries are:\r\n- faiss-cpu 1.8.0.post1 requires numpy<2.0,>=1.0, but you have numpy 2.0.0 which is incompatible.\r\n- tensorflow 2.16.2 requires numpy<2.0.0,>=1.23.5; python_version <= \"3.11\", but you have numpy 2.0.0 which is incompatible.\r\n- transformers 4.42.3 requires numpy<2.0,>=1.17, but you have numpy 2.0.0 which is incompatible.",
"Why is it installing numpy 2 if the dependencies don't support it?",
"For me, I'm getting:\r\n```\r\n❯ uv pip install --system \"datasets[tests] @ .\"\r\nFound existing alias for \"uv pip install\". You should use: \"pipi\"\r\nResolved 119 packages in 934ms\r\n Built datasets @ file:///Users/neil/src/datasets\r\nPrepared 1 package in 1.28s\r\nUninstalled 1 package in 10ms\r\nInstalled 2 packages in 17ms\r\n - datasets==2.20.1.dev0 (from file:///Users/neil/src/datasets)\r\n + datasets==2.20.1.dev0 (from file:///Users/neil/src/datasets)\r\n + numpy==1.26.4\r\n```",
"Which version on Python do you have?",
"3.12.4 I'll try on 3.10 now.",
"Please, note that I obtained the previous incompatible libraries in my local environment, by forcing the update of numpy.",
"In the Python 3.10 CI, the situation is different:\r\n- for example, they install an older version of tensorflow (2.14.0), where probably the constraint on numpy was not yet implemented. See the details: https://github.com/huggingface/datasets/actions/runs/9879100332/job/27306903343?pr=6991\r\n```\r\n> uv pip install --system \"datasets[tests] @ .\"\r\n...\r\n + faiss-cpu==1.8.0\r\n...\r\n + numpy==2.0.0\r\n...\r\n + tensorflow==2.14.0\r\n```\r\n\r\nSee, CI installs:\r\n- faiss-cpu 1.8.0 instead of 1.8.0.post1\r\n- tensorflow 2.14.0 instead of 2.16.2\r\n- transformers 4.41.2 instead of 4.42.3",
"~~The main point is that we cannot support numpy 2.0 until tensorflow and faiss do.~~\r\n\r\nAlternatively, we should ignore/select tests depending on the installed versions.",
"> Alternatively, we should ignore/select tests depending on the installed versions.\r\n\r\nThat works.\r\n\r\nAlternatively, you could depend on tensorflow >= 2.16.2 (etc.) for the tests?",
"Yes, I was thinking of a workaround solution.\r\n\r\nThe issue I see is that our CI will not test numpy 2.0 indeed.",
"> The issue I see is that our CI will not test numpy 2.0 indeed.\r\n\r\nRight, that's the advantage of the test skipping you wanted, I see your point.\r\n\r\nThing is, it won't be long before tensorflow supports numpy 2.0, and then the situation is resolved and your tests test numpy 2.0. Do you really want to invest a lot of effort into testing numpy 2.0 for a few months benefit?",
"Without testing Numpy 2.0, we do not know if there are some other parts in the code broken.",
"> Without testing Numpy 2.0, we do not know if there are some other parts in the code broken.\r\n\r\nYes, you're right. I understand you're point, but you could say this for anything that your test dependencies don't support.\r\n\r\nI guess the solution is to write tests that don't depend on tensorflow, etc., but still use numpy. You could write some Jax tests for example.\r\n\r\nThat said, blocking numpy 2 isn't a good solution in my opinion. These dependencies are extremely late in supporting Numpy 2. They were supposed to be testing against preview releases over three months ago. I don't think the world should have to wait for them.",
"> I guess the solution is to write tests that don't depend on tensorflow, etc., but still use numpy.\r\nThat is my point. What we cannot do is just blindly support Numpy 2.0 without knowing its consequences. We need to test it:\r\n- to know if our core code works with it\r\n- to know what optional libraries are incompatible\r\n\r\nFor example, while testing locally, I have discovered that librosa is also incompatible with numpy-2.0, due to its dependency on soxr:\r\n- https://github.com/dofuuz/python-soxr/issues/28",
"While testing locally, I have also discovered that pytorch does not support Numpy 2.0 on Windows platforms:\r\n- https://github.com/pytorch/pytorch/issues/128860",
"I am adding Numpy 2.0 tests to your PR if you don't mind, before merging this PR.",
"Awesome, thank you! Please let me know if I need to do anything.",
"Now we test numpy 2.0 in the `test_py310_numpy2` CI tests: https://github.com/huggingface/datasets/actions/runs/9907254874/job/27370545495?pr=6991\r\n```\r\n + numpy==2.0.0\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005709 / 0.011353 (-0.005643) | 0.003947 / 0.011008 (-0.007061) | 0.064407 / 0.038508 (0.025899) | 0.029903 / 0.023109 (0.006794) | 0.244838 / 0.275898 (-0.031060) | 0.268894 / 0.323480 (-0.054586) | 0.003200 / 0.007986 (-0.004786) | 0.002867 / 0.004328 (-0.001461) | 0.050016 / 0.004250 (0.045765) | 0.047682 / 0.037052 (0.010629) | 0.252186 / 0.258489 (-0.006303) | 0.292050 / 0.293841 (-0.001791) | 0.030277 / 0.128546 (-0.098270) | 0.012283 / 0.075646 (-0.063364) | 0.205875 / 0.419271 (-0.213397) | 0.037202 / 0.043533 (-0.006331) | 0.246045 / 0.255139 (-0.009094) | 0.272422 / 0.283200 (-0.010777) | 0.020572 / 0.141683 (-0.121111) | 1.114343 / 1.452155 (-0.337812) | 1.169909 / 1.492716 (-0.322808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096612 / 0.018006 (0.078605) | 0.303025 / 0.000490 (0.302535) | 0.000210 / 0.000200 (0.000010) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019292 / 0.037411 (-0.018119) | 0.062548 / 0.014526 (0.048023) | 0.076027 / 0.176557 (-0.100530) | 0.121752 / 0.737135 (-0.615383) | 0.076608 / 0.296338 (-0.219730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283900 / 0.215209 (0.068691) | 2.829829 / 2.077655 (0.752174) | 1.428934 / 1.504120 (-0.075186) | 1.316796 / 1.541195 (-0.224399) | 1.330012 / 
1.468490 (-0.138478) | 0.702245 / 4.584777 (-3.882532) | 2.380454 / 3.745712 (-1.365259) | 2.882881 / 5.269862 (-2.386980) | 1.920345 / 4.565676 (-2.645332) | 0.077860 / 0.424275 (-0.346415) | 0.005295 / 0.007607 (-0.002312) | 0.336968 / 0.226044 (0.110924) | 3.327808 / 2.268929 (1.058879) | 1.781958 / 55.444624 (-53.662666) | 1.489412 / 6.876477 (-5.387065) | 1.634829 / 2.142072 (-0.507243) | 0.787985 / 4.805227 (-4.017243) | 0.134397 / 6.500664 (-6.366267) | 0.042906 / 0.075469 (-0.032563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967647 / 1.841788 (-0.874141) | 11.714541 / 8.074308 (3.640233) | 9.350228 / 10.191392 (-0.841164) | 0.142675 / 0.680424 (-0.537749) | 0.014609 / 0.534201 (-0.519592) | 0.301970 / 0.579283 (-0.277314) | 0.262350 / 0.434364 (-0.172014) | 0.342933 / 0.540337 (-0.197404) | 0.437321 / 1.386936 (-0.949615) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005622 / 0.011353 (-0.005731) | 0.003958 / 0.011008 (-0.007050) | 0.050667 / 0.038508 (0.012159) | 0.032842 / 0.023109 (0.009733) | 0.252292 / 0.275898 (-0.023606) | 0.280602 / 0.323480 (-0.042878) | 0.004313 / 0.007986 (-0.003673) | 0.002870 / 0.004328 (-0.001458) | 0.049549 / 0.004250 (0.045299) | 0.040448 / 0.037052 (0.003396) | 0.270264 / 0.258489 (0.011775) | 0.302988 / 0.293841 (0.009147) | 0.030840 / 0.128546 (-0.097707) | 0.012131 / 0.075646 (-0.063515) | 0.060061 / 0.419271 (-0.359211) | 0.033025 / 0.043533 (-0.010507) | 0.251909 / 0.255139 (-0.003230) | 0.275511 / 0.283200 (-0.007689) | 0.018399 / 0.141683 (-0.123284) | 1.160744 / 1.452155 (-0.291411) | 1.188265 / 1.492716 (-0.304452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097719 / 0.018006 (0.079712) | 0.304389 / 0.000490 (0.303899) | 0.000217 / 0.000200 (0.000017) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022964 / 0.037411 (-0.014447) | 0.076897 / 0.014526 (0.062372) | 0.088930 / 0.176557 (-0.087626) | 0.128926 / 0.737135 (-0.608209) | 0.091049 / 0.296338 (-0.205290) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285670 / 0.215209 (0.070461) | 2.806071 / 2.077655 (0.728416) | 1.527161 / 1.504120 (0.023041) | 1.410291 / 1.541195 (-0.130903) | 1.427071 / 1.468490 (-0.041419) | 0.705527 / 4.584777 (-3.879250) | 0.926915 / 3.745712 (-2.818797) | 2.893078 / 5.269862 (-2.376784) | 1.907113 / 4.565676 (-2.658564) | 0.077326 / 0.424275 (-0.346949) | 0.005182 / 0.007607 (-0.002425) | 0.332282 / 0.226044 (0.106237) | 3.312889 / 2.268929 (1.043960) | 1.853839 / 55.444624 (-53.590785) | 1.592013 / 6.876477 (-5.284464) | 1.620234 / 2.142072 (-0.521838) | 0.776894 / 4.805227 (-4.028333) | 0.132411 / 6.500664 (-6.368253) | 0.041430 / 0.075469 (-0.034039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.003468 / 1.841788 (-0.838320) | 12.472251 / 8.074308 (4.397943) | 10.603243 / 10.191392 (0.411851) | 0.132561 / 0.680424 (-0.547863) | 0.015790 / 0.534201 (-0.518411) | 0.306724 / 0.579283 (-0.272559) | 0.125812 / 0.434364 (-0.308552) | 0.343782 / 0.540337 (-0.196555) | 0.445915 / 1.386936 (-0.941021) |\n\n</details>\n</details>\n\n\n"
] | 2024-06-22T09:19:53Z
| 2024-12-25T17:57:34Z
| 2024-07-12T12:04:53Z
|
CONTRIBUTOR
| null | null | null |
Fixes https://github.com/huggingface/datasets/issues/6980
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6991/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6991/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6991.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6991",
"merged_at": "2024-07-12T12:04:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6991.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6991"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4535
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4535/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4535/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4535/events
|
https://github.com/huggingface/datasets/pull/4535
| 1,278,365,039
|
PR_kwDODunzps46BnXq
| 4,535
|
Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an `ArrowDataset` in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/config.py#L183\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/arrow_dataset.py#L1079-L1092\r\n\r\nSo should I also remove that?\r\n\r\nP.S. I also edited the following code comment which I found misleading as it's not actually storing the indices.\r\n\r\nhttps://github.com/huggingface/datasets/blob/8ddc4bbeb1e2bd307b21f5d21f884649aa2bf640/src/datasets/arrow_dataset.py#L1122",
"_The documentation is not available anymore as the PR was closed or merged._",
"> @lhoestq, there's a value in config.py named DATASET_INDICES_FILENAME which has the arrow extension (which I assume it should be indices.faiss, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an ArrowDataset in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nThe arrow file is used to store an indices mapping (when you shuffle the dataset for example) - not for a faiss index ;)",
"Ok cool thanks a lot for the explanation @lhoestq I was not sure about that :+1: I'll also add it there as you suggested!",
"CI failures are unrelated to this PR and fixed on master, merging"
] | 2022-06-21T12:18:49Z
| 2022-06-27T16:25:09Z
| 2022-06-27T16:14:36Z
|
MEMBER
| null | null | null |
Currently, although the `batch_size` used when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, `ArrowDataset.add_faiss_index` has neither a `batch_size` parameter to propagate to the nested `FaissIndex.add_vectors` call nor `*args, **kwargs`. This PR adds the `batch_size` parameter to both `ArrowDataset.add_faiss_index` and `ArrowDataset.add_faiss_index_from_external_arrays`.
This is useful for tuning the `batch_size` to the specifications of the VM.
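A usage sketch of the new parameter, assuming `faiss-cpu` (or `faiss-gpu`) is installed and using toy data:
```python
import numpy as np
from datasets import Dataset

# Toy dataset with an "embeddings" column (random values for illustration)
ds = Dataset.from_dict(
    {"embeddings": np.random.rand(256, 32).astype(np.float32).tolist()}
)

# Vectors are now added to the FAISS index in batches of 64 instead of the
# default, which can be tuned to the available memory
ds.add_faiss_index(column="embeddings", batch_size=64)

query = np.random.rand(32).astype(np.float32)
scores, examples = ds.get_nearest_examples("embeddings", query, k=3)
```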
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4535/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4535/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4535.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4535",
"merged_at": "2022-06-27T16:14:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4535.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4535"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4998
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4998/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4998/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4998/events
|
https://github.com/huggingface/datasets/pull/4998
| 1,379,466,717
|
PR_kwDODunzps4_Ryp3
| 4,998
|
Don't add a tag on the Hub on release
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T13:54:57Z
| 2022-09-20T14:11:46Z
| 2022-09-20T14:08:54Z
|
MEMBER
| null | null | null |
Datasets with no namespace on the Hub have tags to redirect to the version of datasets where they come from.
I’m about to remove them all because I think they look bad/unexpected in the UI and they’re not actually useful.
Therefore, I'm also disabling tagging.
Note that the CI job will be completely removed in https://github.com/huggingface/datasets/pull/4974 anyway
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4998/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4998/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4998.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4998",
"merged_at": "2022-09-20T14:08:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4998.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4998"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5028
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5028/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5028/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5028/events
|
https://github.com/huggingface/datasets/issues/5028
| 1,386,272,533
|
I_kwDODunzps5SoNcV
| 5,028
|
passing parameters to the method passed to Dataset.from_generator()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/64276129?v=4",
"events_url": "https://api.github.com/users/Basir-mahmood/events{/privacy}",
"followers_url": "https://api.github.com/users/Basir-mahmood/followers",
"following_url": "https://api.github.com/users/Basir-mahmood/following{/other_user}",
"gists_url": "https://api.github.com/users/Basir-mahmood/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Basir-mahmood",
"id": 64276129,
"login": "Basir-mahmood",
"node_id": "MDQ6VXNlcjY0Mjc2MTI5",
"organizations_url": "https://api.github.com/users/Basir-mahmood/orgs",
"received_events_url": "https://api.github.com/users/Basir-mahmood/received_events",
"repos_url": "https://api.github.com/users/Basir-mahmood/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Basir-mahmood/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Basir-mahmood/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Basir-mahmood",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n"
] | 2022-09-26T15:20:06Z
| 2022-10-03T13:00:00Z
| 2022-10-03T13:00:00Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Big thanks for providing dataset creation via a generator.
I want to ask whether there is any way to pass parameters to the function given to `Dataset.from_generator()`, as follows.
```
from datasets import Dataset
def gen(param1):
    for idx in range(len(custom_dataset)):
        yield custom_dataset[idx] + param1
ds = Dataset.from_generator(gen(param1))
```
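The answer in the comments suggests two ways to do this; a runnable sketch with placeholder data:
```python
import functools
from datasets import Dataset

custom_dataset = ["hello", "world"]  # placeholder data for illustration

def gen(param1):
    for idx in range(len(custom_dataset)):
        yield {"text": custom_dataset[idx] + param1}

# Option 1: pass parameters via gen_kwargs
ds = Dataset.from_generator(gen, gen_kwargs={"param1": "!"})

# Option 2: bind parameters with functools.partial
ds = Dataset.from_generator(functools.partial(gen, param1="!"))
```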
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5028/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5028/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4963
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4963/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4963/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4963/events
|
https://github.com/huggingface/datasets/issues/4963
| 1,368,201,188
|
I_kwDODunzps5RjRfk
| 4,963
|
Dataset without script does not support regular JSON data file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. "
] | 2022-09-09T18:45:33Z
| 2022-09-20T15:40:07Z
| 2022-09-20T15:40:07Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/datasets/julien-c/label-studio-my-dogs
### Description
<img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png">
### Owner
Yes
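As the comment explains, only line-delimited JSON is supported out of the box. A minimal conversion sketch, with hypothetical file paths, assuming the file holds a top-level list of records:
```python
import json

# "data.json" and "data.jsonl" are hypothetical paths
with open("data.json") as f:
    records = json.load(f)

# Rewrite as line-delimited JSON (NDJSON), which pyarrow.json.read_json accepts
with open("data.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```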
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4963/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4963/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6735
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6735/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6735/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6735/events
|
https://github.com/huggingface/datasets/pull/6735
| 2,189,132,932
|
PR_kwDODunzps5px84g
| 6,735
|
Add `mode` parameter to `Image` feature
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6735). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005009 / 0.011353 (-0.006344) | 0.003547 / 0.011008 (-0.007461) | 0.063014 / 0.038508 (0.024506) | 0.027699 / 0.023109 (0.004589) | 0.247140 / 0.275898 (-0.028758) | 0.273610 / 0.323480 (-0.049870) | 0.003115 / 0.007986 (-0.004871) | 0.002712 / 0.004328 (-0.001616) | 0.049134 / 0.004250 (0.044883) | 0.041582 / 0.037052 (0.004530) | 0.269992 / 0.258489 (0.011503) | 0.294516 / 0.293841 (0.000675) | 0.027818 / 0.128546 (-0.100728) | 0.010568 / 0.075646 (-0.065078) | 0.207710 / 0.419271 (-0.211561) | 0.035767 / 0.043533 (-0.007766) | 0.260058 / 0.255139 (0.004919) | 0.277615 / 0.283200 (-0.005585) | 0.020192 / 0.141683 (-0.121491) | 1.116863 / 1.452155 (-0.335292) | 1.156868 / 1.492716 (-0.335848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095087 / 0.018006 (0.077081) | 0.303249 / 0.000490 (0.302759) | 0.000215 / 0.000200 (0.000015) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018866 / 0.037411 (-0.018545) | 0.063853 / 0.014526 (0.049328) | 0.073863 / 0.176557 (-0.102693) | 0.121399 / 0.737135 (-0.615737) | 0.076014 / 0.296338 (-0.220325) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289843 / 0.215209 (0.074634) | 2.844085 / 2.077655 (0.766431) | 1.528022 / 1.504120 (0.023902) | 1.397352 / 1.541195 (-0.143843) | 1.394676 / 
1.468490 (-0.073814) | 0.555899 / 4.584777 (-4.028878) | 2.354010 / 3.745712 (-1.391702) | 2.737715 / 5.269862 (-2.532146) | 1.731260 / 4.565676 (-2.834416) | 0.062315 / 0.424275 (-0.361960) | 0.004920 / 0.007607 (-0.002687) | 0.342921 / 0.226044 (0.116877) | 3.416529 / 2.268929 (1.147600) | 1.862941 / 55.444624 (-53.581684) | 1.599661 / 6.876477 (-5.276816) | 1.617200 / 2.142072 (-0.524873) | 0.635129 / 4.805227 (-4.170099) | 0.121651 / 6.500664 (-6.379013) | 0.041867 / 0.075469 (-0.033602) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990825 / 1.841788 (-0.850962) | 11.435576 / 8.074308 (3.361268) | 9.490194 / 10.191392 (-0.701198) | 0.133295 / 0.680424 (-0.547129) | 0.014061 / 0.534201 (-0.520140) | 0.288648 / 0.579283 (-0.290635) | 0.268874 / 0.434364 (-0.165490) | 0.323288 / 0.540337 (-0.217049) | 0.426090 / 1.386936 (-0.960846) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006712 / 0.011353 (-0.004641) | 0.003723 / 0.011008 (-0.007285) | 0.049814 / 0.038508 (0.011306) | 0.039323 / 0.023109 (0.016213) | 0.279244 / 0.275898 (0.003346) | 0.297139 / 0.323480 (-0.026341) | 0.004197 / 0.007986 (-0.003788) | 0.002753 / 0.004328 (-0.001576) | 0.048820 / 0.004250 (0.044569) | 0.049593 / 0.037052 (0.012541) | 0.287247 / 0.258489 (0.028758) | 0.338078 / 0.293841 (0.044237) | 0.029303 / 0.128546 (-0.099243) | 0.010292 / 0.075646 (-0.065354) | 0.057852 / 0.419271 (-0.361419) | 0.053390 / 0.043533 (0.009857) | 0.275155 / 0.255139 (0.020016) | 0.292891 / 0.283200 (0.009692) | 0.020007 / 0.141683 (-0.121676) | 1.161731 / 1.452155 (-0.290424) | 1.232162 / 1.492716 (-0.260555) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092848 / 0.018006 (0.074842) | 0.301180 / 0.000490 (0.300690) | 0.000236 / 0.000200 (0.000036) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022477 / 0.037411 (-0.014934) | 0.077012 / 0.014526 (0.062486) | 0.087335 / 0.176557 (-0.089222) | 0.126761 / 0.737135 (-0.610374) | 0.089249 / 0.296338 (-0.207090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290722 / 0.215209 (0.075513) | 2.884485 / 2.077655 (0.806830) | 1.565775 / 1.504120 (0.061656) | 1.442369 / 1.541195 (-0.098825) | 1.453995 / 1.468490 (-0.014495) | 0.563193 / 4.584777 (-4.021584) | 2.413610 / 3.745712 (-1.332102) | 2.684567 / 5.269862 (-2.585295) | 1.753322 / 4.565676 (-2.812354) | 0.061879 / 0.424275 (-0.362396) | 0.005080 / 0.007607 (-0.002527) | 0.347274 / 0.226044 (0.121229) | 3.435836 / 2.268929 (1.166907) | 1.937893 / 55.444624 (-53.506731) | 1.657824 / 6.876477 (-5.218653) | 1.777767 / 2.142072 (-0.364305) | 0.656757 / 4.805227 (-4.148471) | 0.117144 / 6.500664 (-6.383520) | 0.040691 / 0.075469 (-0.034778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012435 / 1.841788 (-0.829353) | 12.038001 / 8.074308 (3.963693) | 10.363947 / 10.191392 (0.172555) | 0.140711 / 0.680424 (-0.539713) | 0.014937 / 0.534201 (-0.519264) | 0.291070 / 0.579283 (-0.288213) | 0.277180 / 0.434364 (-0.157184) | 0.327433 / 0.540337 (-0.212904) | 0.439767 / 1.386936 (-0.947169) |\n\n</details>\n</details>\n\n\n"
] | 2024-03-15T17:21:12Z
| 2024-03-18T15:47:48Z
| 2024-03-18T15:41:33Z
|
COLLABORATOR
| null | null | null |
Fix https://github.com/huggingface/datasets/issues/6675
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6735/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6735/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6735",
"merged_at": "2024-03-18T15:41:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6735"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6823
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6823/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6823/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6823/events
|
https://github.com/huggingface/datasets/issues/6823
| 2,250,775,569
|
I_kwDODunzps6GKBwR
| 6,823
|
Loading problems of Datasets with a single shard
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/60151338?v=4",
"events_url": "https://api.github.com/users/andjoer/events{/privacy}",
"followers_url": "https://api.github.com/users/andjoer/followers",
"following_url": "https://api.github.com/users/andjoer/following{/other_user}",
"gists_url": "https://api.github.com/users/andjoer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andjoer",
"id": 60151338,
"login": "andjoer",
"node_id": "MDQ6VXNlcjYwMTUxMzM4",
"organizations_url": "https://api.github.com/users/andjoer/orgs",
"received_events_url": "https://api.github.com/users/andjoer/received_events",
"repos_url": "https://api.github.com/users/andjoer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andjoer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andjoer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andjoer",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Has there been a PR to resolve this already?",
"The problem rises from using a wrong api.\r\nWhen loading a save_to_disk dataset, **load_from_disk** (instead of load_dataset) is what should be used.\r\n\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ndst.save_to_disk(\"cache\")\r\ndst = load_from_disk(\"cache\")\r\n```"
] | 2024-04-18T13:59:00Z
| 2024-11-25T05:40:09Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When a dataset is saved to disk with a single shard, it is not loaded the same way as when it is saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. Everything works well when the range of the loop is 10000, but it fails when it is 1000.
```
from PIL import Image
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset

def load_image():
    # Generate random noise image
    noise = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    return Image.fromarray(noise)

def create_dataset():
    input_images = []
    output_images = []
    text_prompts = []
    for _ in range(10000):  # this is the problematic parameter
        input_images.append(load_image())
        output_images.append(load_image())
        text_prompts.append('test prompt')
    data = {'input_image': input_images, 'output_image': output_images, 'text_prompt': text_prompts}
    dataset = Dataset.from_dict(data)
    return DatasetDict({'train': dataset})

dataset = create_dataset()
print('dataset before saving')
print(dataset)
print(dataset['train'].column_names)
dataset.save_to_disk('test_ds')
print('dataset after loading')
dataset_loaded = load_dataset('test_ds')
print(dataset_loaded)
print(dataset_loaded['train'].column_names)
```
The output for 1000 iterations is:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 1000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (1/1 shards): 100%|█| 1000/1000 [00:00<00:00, 5156.00 example
dataset after loading
Generating train split: 1 examples [00:00, 230.52 examples/s]
DatasetDict({
train: Dataset({
features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'],
num_rows: 1
})
})
['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split']
```
For 10000 iterations (8 shards) it is correct:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (8/8 shards): 100%|█| 10000/10000 [00:01<00:00, 6237.68 examp
dataset after loading
Generating train split: 10000 examples [00:00, 10773.16 examples/s]
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
```
### Expected behavior
The procedure should work for a dataset with one shard the same as for one with multiple shards.
### Environment info
- `datasets` version: 2.18.0
- Platform: macOS-14.1-arm64-arm-64bit
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
Edit: I looked in the source code of load.py in datasets. I should have used "load_from_disk", and it indeed works that way. But ideally load_dataset would have raised an error, the same way as when I call it with a path:
```
if Path(path, config.DATASET_STATE_JSON_FILENAME).exists():
raise ValueError(
"You are trying to load a dataset that was saved using `save_to_disk`. "
"Please use `load_from_disk` instead."
)
```
Nevertheless, I find it interesting that it works just fine and without a warning if there are multiple shards.
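For reference, a minimal sketch of the working round-trip under the setup above (reusing the `test_ds` directory the script creates):
```python
from datasets import load_from_disk

# load_from_disk reads the saved state/info files directly,
# so the shard count no longer matters
dataset_loaded = load_from_disk('test_ds')
print(dataset_loaded['train'].column_names)  # ['input_image', 'output_image', 'text_prompt']
```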
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6823/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6823/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6190
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6190/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6190/events
|
https://github.com/huggingface/datasets/issues/6190
| 1,871,582,175
|
I_kwDODunzps5vjhPf
| 6,190
|
`Invalid user token` even when correct user token is passed!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Vaibhavs10",
"id": 18682411,
"login": "Vaibhavs10",
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Vaibhavs10",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This is because `download_config.use_auth_token` is deprecated - you should use `download_config.token` instead",
"Works! Thanks for the quick fix! <3"
] | 2023-08-29T12:37:03Z
| 2023-08-29T13:01:10Z
| 2023-08-29T13:01:09Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm working on a dataset which comprises other datasets on the hub.
URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
Note: Some of the sub-datasets in this metadataset require explicit access.
All the other datasets work fine, except `common_voice`.
### Steps to reproduce the bug
https://github.com/Vaibhavs10/scratchpad/blob/main/cv_datasets_bug_repro.ipynb
### Expected behavior
It should work if the provided access token is valid (as it does for all the other datasets)
### Environment info
datasets version -> 2.14.4
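Following the fix suggested in the comments, a minimal sketch of the corrected call (the dataset id, config name, and token value are illustrative placeholders):
```python
from datasets import load_dataset, DownloadConfig

# use the `token` field instead of the deprecated `use_auth_token`
dl_config = DownloadConfig(token="hf_xxx")  # placeholder token
ds = load_dataset(
    "mozilla-foundation/common_voice_11_0", "en",
    split="test", download_config=dl_config,
)
```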
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Vaibhavs10",
"id": 18682411,
"login": "Vaibhavs10",
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Vaibhavs10",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6190/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6053
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6053/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6053/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6053/events
|
https://github.com/huggingface/datasets/issues/6053
| 1,812,635,902
|
I_kwDODunzps5sCqD-
| 6,053
|
Change package name from "datasets" to something less generic
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4",
"events_url": "https://api.github.com/users/jack-jjm/events{/privacy}",
"followers_url": "https://api.github.com/users/jack-jjm/followers",
"following_url": "https://api.github.com/users/jack-jjm/following{/other_user}",
"gists_url": "https://api.github.com/users/jack-jjm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jack-jjm",
"id": 2124157,
"login": "jack-jjm",
"node_id": "MDQ6VXNlcjIxMjQxNTc=",
"organizations_url": "https://api.github.com/users/jack-jjm/orgs",
"received_events_url": "https://api.github.com/users/jack-jjm/received_events",
"repos_url": "https://api.github.com/users/jack-jjm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jack-jjm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jack-jjm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jack-jjm",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"This would break a lot of existing code, so we can't really do this.",
"I encountered this issue while working on a large project with 6+ years history. We have a submodule named datasets in the backend, and face a big challenge incorporating huggingface datasets into the project, especially considering django app renaming and other issues.\r\nIt would be nice if the authors at least provide a recipe on how to avoid name conflict in this situation."
] | 2023-07-19T19:53:28Z
| 2024-11-20T21:22:36Z
| 2023-10-03T16:04:09Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
I'm repeatedly finding myself in situations where I want to have a module called `datasets.py` or `evaluate.py` in my code and can't, because those names are taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice, terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and, at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors.
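Not part of the request itself, but for readers hitting the name clash described in the comments, a short illustration of why a local `datasets.py` shadows the installed library (a minimal sketch; no library-specific behavior involved):
```python
# Python resolves `import datasets` by walking sys.path in order, and the
# script's own directory comes first, so a local datasets.py wins over the
# installed package.
import sys
print(sys.path[0])

import datasets
print(datasets.__file__)  # shows which module was actually imported
```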
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6053/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6053/timeline
| null |
not_planned
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6506
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6506/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6506/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6506/events
|
https://github.com/huggingface/datasets/issues/6506
| 2,044,975,038
|
I_kwDODunzps5549e-
| 6,506
|
Incorrect test set labels for RTE and CoLA datasets via load_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73316684?v=4",
"events_url": "https://api.github.com/users/emreonal11/events{/privacy}",
"followers_url": "https://api.github.com/users/emreonal11/followers",
"following_url": "https://api.github.com/users/emreonal11/following{/other_user}",
"gists_url": "https://api.github.com/users/emreonal11/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emreonal11",
"id": 73316684,
"login": "emreonal11",
"node_id": "MDQ6VXNlcjczMzE2Njg0",
"organizations_url": "https://api.github.com/users/emreonal11/orgs",
"received_events_url": "https://api.github.com/users/emreonal11/received_events",
"repos_url": "https://api.github.com/users/emreonal11/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emreonal11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emreonal11/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emreonal11",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"As this is a specific issue of the \"glue\" dataset, I have transferred it to the dataset Discussion page: https://huggingface.co/datasets/glue/discussions/15\r\n\r\nLet's continue the discussion there!"
] | 2023-12-16T22:06:08Z
| 2023-12-21T09:57:57Z
| 2023-12-21T09:57:57Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1.
Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the test set for evaluation purposes?
### Steps to reproduce the bug
```
!pip install datasets
from datasets import load_dataset

rte_data = load_dataset('glue', 'rte')
cola_data = load_dataset('glue', 'cola')
print(rte_data['test'][0:30]['label'])
print(cola_data['test'][0:30]['label'])
```
Output:
```
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
```
The non-label test data seems to be fine, e.g. `rte_data['test'][1]` is:
```
{'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.",
 'sentence2': 'Authorities in Brazil hold 200 people as hostage.',
 'label': -1,
 'idx': 1}
```
Training and validation data are also fine, e.g. `rte_data['train'][0]` is:
```
{'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.',
 'sentence2': 'Weapons of Mass Destruction Found in Iraq.',
 'label': 1,
 'idx': 0}
```
### Expected behavior
Expected the labels to be binary 0/1 values; got all -1s instead.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0
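The -1 test labels are the standard GLUE setup (test labels are withheld for the leaderboard), so evaluation is normally done on the labeled validation split; a minimal sketch:
```python
from datasets import load_dataset

rte_val = load_dataset("glue", "rte", split="validation")
print(set(rte_val["label"]))  # {0, 1}: real labels, unlike the -1 test placeholders
```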
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6506/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6506/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4566
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4566/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4566/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4566/events
|
https://github.com/huggingface/datasets/issues/4566
| 1,284,397,594
|
I_kwDODunzps5Mjloa
| 4,566
|
Document link #load_dataset_enhancing_performance points to nowhere
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11674033?v=4",
"events_url": "https://api.github.com/users/subercui/events{/privacy}",
"followers_url": "https://api.github.com/users/subercui/followers",
"following_url": "https://api.github.com/users/subercui/following{/other_user}",
"gists_url": "https://api.github.com/users/subercui/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/subercui",
"id": 11674033,
"login": "subercui",
"node_id": "MDQ6VXNlcjExNjc0MDMz",
"organizations_url": "https://api.github.com/users/subercui/orgs",
"received_events_url": "https://api.github.com/users/subercui/received_events",
"repos_url": "https://api.github.com/users/subercui/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/subercui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subercui/subscriptions",
"type": "User",
"url": "https://api.github.com/users/subercui",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?",
"https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documentation works."
] | 2022-06-25T01:18:19Z
| 2023-01-24T16:33:40Z
| 2023-01-24T16:33:40Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug

The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dataset_enhancing_performance) link [here](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere; I guess it should point to https://huggingface.co/docs/datasets/v2.3.2/en/cache#improve-performance?
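For context, a minimal sketch of the option the broken anchor is supposed to document (the path is a placeholder):
```python
from datasets import load_from_disk

# keep_in_memory=True copies the Arrow data into RAM instead of memory-mapping it
ds = load_from_disk("path/to/dataset", keep_in_memory=True)
```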
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4566/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4566/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5672
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5672/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5672/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5672/events
|
https://github.com/huggingface/datasets/issues/5672
| 1,641,005,322
|
I_kwDODunzps5hz8EK
| 5,672
|
Pushing dataset to hub crash
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14275989?v=4",
"events_url": "https://api.github.com/users/tzvc/events{/privacy}",
"followers_url": "https://api.github.com/users/tzvc/followers",
"following_url": "https://api.github.com/users/tzvc/following{/other_user}",
"gists_url": "https://api.github.com/users/tzvc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tzvc",
"id": 14275989,
"login": "tzvc",
"node_id": "MDQ6VXNlcjE0Mjc1OTg5",
"organizations_url": "https://api.github.com/users/tzvc/orgs",
"received_events_url": "https://api.github.com/users/tzvc/received_events",
"repos_url": "https://api.github.com/users/tzvc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tzvc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tzvc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tzvc",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\n\r\nIn the meantime you can install datasets from source",
"Hi @lhoestq ,\r\n\r\nWhat version of datasets library fix this case? I am using the last `v2.10.1` and I get the same error.",
"We just released 2.11 which includes a fix :)"
] | 2023-03-26T17:42:13Z
| 2023-03-30T08:11:05Z
| 2023-03-30T08:11:05Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Uploading a dataset with `push_to_hub()` fails without an error description.
### Steps to reproduce the bug
Hey there,
I've built an image dataset of 100k image + text pairs as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder
Now I'm trying to push it to the hub but I'm running into issues. First, I tried doing it via git directly: I added all the files to git lfs and pushed, but I got hit with an error saying Hugging Face only accepts up to 10k files in a folder.
So I'm now trying with the `push_to_hub()` func as follows:
```python
from datasets import load_dataset
import os
dataset = load_dataset("imagefolder", data_dir="./data", split="train")
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
```
But again, this produces an error:
```
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100212/100212 [00:00<00:00, 439108.61it/s]
Downloading and preparing dataset imagefolder/default to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 100211/100211 [00:00<00:00, 149323.73it/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15947.92it/s]
Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2245.34it/s]
Dataset imagefolder downloaded and prepared to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f. Subsequent calls will reuse this data.
Resuming upload of the dataset shards.
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:31<00:00, 2.24s/it]
Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [00:00<00:00, 225kB/s]
Traceback (most recent call last):
File "/home/contact_theochampion/organization-logos/push_to_hub.py", line 5, in <module>
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
File "/home/contact_theochampion/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub
repo_info = dataset_infos[next(iter(dataset_infos))]
StopIteration
```
What could be happening here?
### Expected behavior
The dataset is pushed to the hub
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.10.0-21-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
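Per the comments, upgrading to `datasets>=2.11` fixes the crash; a minimal sketch of the retry (the `max_shard_size` value is illustrative):
```python
# pip install -U "datasets>=2.11.0"
import os
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="./data", split="train")
dataset.push_to_hub(
    "tzvc/organization-logos",
    token=os.environ.get("HF_TOKEN"),
    max_shard_size="500MB",  # optional: bound the size of each uploaded shard
)
```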
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5672/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5672/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5175
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5175/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5175/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5175/events
|
https://github.com/huggingface/datasets/issues/5175
| 1,428,696,231
|
I_kwDODunzps5VKCyn
| 5,175
|
Loading an external NER dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Taghreed7878",
"id": 112555442,
"login": "Taghreed7878",
"node_id": "U_kgDOBrV1sg",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Taghreed7878",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-10-30T09:31:55Z
| 2022-11-01T13:15:49Z
| 2022-11-01T13:15:49Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
I need to use huggingface datasets to load a custom dataset similar to conll2003, but with more entities, and each file contains only two columns: word and NER tag.
I tried this code snippet that I found here as an answer to a similar issue:
```python
from datasets import Dataset

INPUT_COLUMNS = "ID Text NER".split()

def read_conll(file):
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line.startswith("-DOCSTART-") or line == "\n" or not line:
                if example[next(iter(example))]:
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
            else:
                row_cols = line.split()
                for i, col in enumerate(example):
                    example[col] = row_cols[i].rstrip()

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "some_path"})
```
But the following error happened:
```
ValueError: Please pass `features` or at least one example when writing data
```
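A corrected sketch, assuming a two-column file (the `train.conll` path is a placeholder): `Dataset.from_generator` expects the generator to yield plain dicts rather than `(idx, example)` tuples, and the values need to be appended to the per-column lists rather than assigned over them:
```python
from datasets import Dataset

INPUT_COLUMNS = ["Text", "NER"]  # word and NER tag

def read_conll(file):
    example = {col: [] for col in INPUT_COLUMNS}
    with open(file, encoding="utf-8") as f:
        for line in f:
            if line.startswith("-DOCSTART-") or line.strip() == "":
                if example[INPUT_COLUMNS[0]]:
                    yield example  # plain dict, not (idx, example)
                    example = {col: [] for col in INPUT_COLUMNS}
            else:
                for col, value in zip(INPUT_COLUMNS, line.split()):
                    example[col].append(value)  # append instead of overwriting the list
    if example[INPUT_COLUMNS[0]]:  # flush the last example
        yield example

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "train.conll"})
```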
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4",
"events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}",
"followers_url": "https://api.github.com/users/Taghreed7878/followers",
"following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}",
"gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Taghreed7878",
"id": 112555442,
"login": "Taghreed7878",
"node_id": "U_kgDOBrV1sg",
"organizations_url": "https://api.github.com/users/Taghreed7878/orgs",
"received_events_url": "https://api.github.com/users/Taghreed7878/received_events",
"repos_url": "https://api.github.com/users/Taghreed7878/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Taghreed7878",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5175/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5175/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5589
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5589/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5589/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5589/events
|
https://github.com/huggingface/datasets/pull/5589
| 1,603,535,704
|
PR_kwDODunzps5K9K1i
| 5,589
|
Revert "pass the dataset features to the IterableDataset.from_generator"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008442 / 0.011353 (-0.002911) | 0.004567 / 0.011008 (-0.006441) | 0.100688 / 0.038508 (0.062180) | 0.029568 / 0.023109 (0.006459) | 0.306993 / 0.275898 (0.031095) | 0.362626 / 0.323480 (0.039146) | 0.006983 / 0.007986 (-0.001002) | 0.003424 / 0.004328 (-0.000905) | 0.079050 / 0.004250 (0.074799) | 0.036087 / 0.037052 (-0.000966) | 0.318205 / 0.258489 (0.059716) | 0.353882 / 0.293841 (0.060041) | 0.033091 / 0.128546 (-0.095455) | 0.011468 / 0.075646 (-0.064178) | 0.321125 / 0.419271 (-0.098146) | 0.040645 / 0.043533 (-0.002888) | 0.309827 / 0.255139 (0.054688) | 0.344848 / 0.283200 (0.061648) | 0.087100 / 0.141683 (-0.054583) | 1.465123 / 1.452155 (0.012968) | 1.499457 / 1.492716 (0.006741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.171619 / 0.018006 (0.153613) | 0.410198 / 0.000490 (0.409709) | 0.002391 / 0.000200 (0.002191) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022913 / 0.037411 (-0.014499) | 0.097275 / 0.014526 (0.082749) | 0.103902 / 0.176557 (-0.072655) | 0.148855 / 0.737135 (-0.588281) | 0.107247 / 0.296338 (-0.189092) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413139 / 0.215209 (0.197930) | 4.131760 / 2.077655 (2.054105) | 1.854491 / 1.504120 (0.350371) | 1.625524 / 1.541195 (0.084329) | 1.666665 / 1.468490 
(0.198175) | 0.687105 / 4.584777 (-3.897672) | 3.327124 / 3.745712 (-0.418588) | 1.830820 / 5.269862 (-3.439042) | 1.147930 / 4.565676 (-3.417746) | 0.081586 / 0.424275 (-0.342689) | 0.012422 / 0.007607 (0.004815) | 0.523723 / 0.226044 (0.297678) | 5.246977 / 2.268929 (2.978049) | 2.288350 / 55.444624 (-53.156275) | 1.933740 / 6.876477 (-4.942737) | 1.954356 / 2.142072 (-0.187716) | 0.804434 / 4.805227 (-4.000793) | 0.147621 / 6.500664 (-6.353043) | 0.064835 / 0.075469 (-0.010634) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244841 / 1.841788 (-0.596947) | 13.758465 / 8.074308 (5.684157) | 13.984576 / 10.191392 (3.793184) | 0.144860 / 0.680424 (-0.535564) | 0.028616 / 0.534201 (-0.505584) | 0.401928 / 0.579283 (-0.177355) | 0.415294 / 0.434364 (-0.019069) | 0.476483 / 0.540337 (-0.063854) | 0.569257 / 1.386936 (-0.817679) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006556 / 0.011353 (-0.004797) | 0.004502 / 0.011008 (-0.006507) | 0.074828 / 0.038508 (0.036319) | 0.027537 / 0.023109 (0.004427) | 0.339961 / 0.275898 (0.064063) | 0.372491 / 0.323480 (0.049011) | 0.005010 / 0.007986 (-0.002976) | 0.004624 / 0.004328 (0.000295) | 0.074459 / 0.004250 (0.070208) | 0.037539 / 0.037052 (0.000486) | 0.341031 / 0.258489 (0.082542) | 0.383397 / 0.293841 (0.089556) | 0.031706 / 0.128546 (-0.096840) | 0.011542 / 0.075646 (-0.064104) | 0.084882 / 0.419271 (-0.334389) | 0.041860 / 0.043533 (-0.001673) | 0.338699 / 0.255139 (0.083560) | 0.365666 / 0.283200 (0.082467) | 0.088966 / 0.141683 (-0.052717) | 1.502493 / 1.452155 (0.050339) | 1.570746 / 1.492716 (0.078030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217547 / 0.018006 (0.199541) | 0.392407 / 0.000490 (0.391918) | 0.000388 / 0.000200 (0.000188) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024571 / 0.037411 (-0.012840) | 0.099259 / 0.014526 (0.084734) | 0.107850 / 0.176557 (-0.068707) | 0.157686 / 0.737135 (-0.579449) | 0.109761 / 0.296338 (-0.186578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434791 / 0.215209 (0.219582) | 4.323099 / 2.077655 (2.245444) | 2.063610 / 1.504120 (0.559490) | 1.866136 / 1.541195 (0.324941) | 1.910185 / 1.468490 (0.441695) | 0.696584 / 4.584777 (-3.888193) | 3.398017 / 3.745712 (-0.347695) | 1.848473 / 5.269862 (-3.421388) | 1.168238 / 4.565676 (-3.397438) | 0.083222 / 0.424275 (-0.341053) | 0.012332 / 0.007607 (0.004725) | 0.538953 / 0.226044 (0.312909) | 5.421273 / 2.268929 (3.152344) | 2.499877 / 55.444624 (-52.944747) | 2.161853 / 6.876477 (-4.714624) | 2.183941 / 2.142072 (0.041868) | 0.803916 / 4.805227 (-4.001311) | 0.150266 / 6.500664 (-6.350398) | 0.067399 / 0.075469 (-0.008070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280479 / 1.841788 (-0.561309) | 13.728074 / 8.074308 (5.653766) | 12.946098 / 10.191392 (2.754706) | 0.128459 / 0.680424 (-0.551965) | 0.016567 / 0.534201 (-0.517634) | 0.374461 / 0.579283 (-0.204822) | 0.386973 / 0.434364 (-0.047391) | 0.459754 / 0.540337 (-0.080583) | 0.543870 / 1.386936 (-0.843066) |\n\n</details>\n</details>\n\n\n",
"Instead of reverting the change, maybe we can use the same conversion in `to_iterable_dataset` as in `ArrowBasedBuilder._as_streaming_dataset` to avoid decoding images twice?",
"True, let me take a look",
"Closing in favor of https://github.com/huggingface/datasets/pull/5655"
] | 2023-02-28T17:52:04Z
| 2023-09-24T10:07:33Z
| 2023-03-21T14:18:18Z
|
MEMBER
| null | null | null |
This reverts commit b91070b9c09673e2e148eec458036ab6a62ac042 (temporarily)
It hurts iterable dataset performance a lot (e.g. 4x slower, because it encodes+decodes images unnecessarily). I think we need to fix this before re-adding it.
cc @mariosasko @Hubert-Bonisseur
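For context, a rough sketch of the pattern whose feature propagation this PR temporarily reverts (the schema here is illustrative): when `features` are attached, each yielded example is encoded to the schema and decoded again on read, which is where the image slowdown comes from.
```python
from datasets import IterableDataset, Features, Value

def gen():
    for i in range(3):
        yield {"id": i, "text": f"example {i}"}

# with features attached, examples pass through an extra encode/decode step
ids = IterableDataset.from_generator(
    gen,
    features=Features({"id": Value("int64"), "text": Value("string")}),
)
print(next(iter(ids)))
```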
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5589/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5589/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5589",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5589"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6845
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6845/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6845/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6845/events
|
https://github.com/huggingface/datasets/issues/6845
| 2,265,876,551
|
I_kwDODunzps6HDohH
| 6,845
|
load_dataset doesn't support list column
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4",
"events_url": "https://api.github.com/users/arthasking123/events{/privacy}",
"followers_url": "https://api.github.com/users/arthasking123/followers",
"following_url": "https://api.github.com/users/arthasking123/following{/other_user}",
"gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arthasking123",
"id": 16257131,
"login": "arthasking123",
"node_id": "MDQ6VXNlcjE2MjU3MTMx",
"organizations_url": "https://api.github.com/users/arthasking123/orgs",
"received_events_url": "https://api.github.com/users/arthasking123/received_events",
"repos_url": "https://api.github.com/users/arthasking123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arthasking123",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I encountered this same issue when loading a customized dataset for ORPO training, in which there were three columns and two of them were lists. \r\nI debugged and found that it might be caused by the type-infer mechanism and because in some chunks one of the columns is always an empty list ([]), it was regarded as ```list<item: null>```, however in some other chunk it was ```list<item: string>```. This triggered a TypeError running the function ```table_cast()```.\r\n\r\nI temporarily fixed this by re-dumping the file into a regular JSON format instead of lines of JSON dict. I didn't dig deeper for the lack of knowledge and programming ability but I do hope some developer of this repo will find and fix it."
] | 2024-04-26T14:11:44Z
| 2024-05-15T12:06:59Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
`dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")`
got exception:
```
Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature
casted_array_values = _c(array.values, feature[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string>
to
{'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/llm/train-2.py", line 150, in <module>
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
`dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")`
### Expected behavior
no exception
### Environment info
python 3.11
datasets 2.19.0
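Until the type inference handles empty lists across chunks (see the comment above), one possible workaround is to pin the schema explicitly; a minimal sketch with hypothetical column names:
```python
from datasets import load_dataset, Features, Sequence, Value

# declaring the list column's element type up front prevents empty chunks
# from being inferred as list<item: null>
features = Features({
    "question": Value("string"),
    "cypher": Sequence(Value("string")),
})
ds = load_dataset("json", data_files="data.jsonl", features=features)
```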
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6845/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6845/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6582
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6582/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6582/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6582/events
|
https://github.com/huggingface/datasets/pull/6582
| 2,076,072,101
|
PR_kwDODunzps5jxpry
| 6,582
|
Fix for Incorrect ex_iterable used with multi num_worker
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/136600500?v=4",
"events_url": "https://api.github.com/users/kq-chen/events{/privacy}",
"followers_url": "https://api.github.com/users/kq-chen/followers",
"following_url": "https://api.github.com/users/kq-chen/following{/other_user}",
"gists_url": "https://api.github.com/users/kq-chen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kq-chen",
"id": 136600500,
"login": "kq-chen",
"node_id": "U_kgDOCCRbtA",
"organizations_url": "https://api.github.com/users/kq-chen/orgs",
"received_events_url": "https://api.github.com/users/kq-chen/received_events",
"repos_url": "https://api.github.com/users/kq-chen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kq-chen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kq-chen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kq-chen",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"A toy example to reveal the bug.\r\n\r\n```python\r\n\"\"\"\r\nDATASETS_VERBOSITY=debug torchrun --nproc-per-node 2 main.py \r\n\"\"\"\r\nimport torch.utils.data\r\nimport torch.distributed\r\nimport datasets.distributed\r\nimport datasets\r\n\r\n# num shards = 4\r\nshards = [(0, 100), (100, 200), (200, 300), (300, 400)]\r\n\r\n\r\ndef gen(shards):\r\n for st, ed in shards:\r\n yield from range(st, ed)\r\n\r\ntorch.distributed.init_process_group()\r\n\r\n# want to create total worker = world_size * 8\r\nds = datasets.IterableDataset.from_generator(gen, gen_kwargs={'shards': shards})\r\nds = datasets.distributed.split_dataset_by_node(\r\n ds,\r\n rank=torch.distributed.get_rank(),\r\n world_size=torch.distributed.get_world_size(),\r\n)\r\ndl = torch.utils.data.DataLoader(ds, batch_size=10, num_workers=8)\r\n\r\nfor x in dl:\r\n print(f\"RANK={torch.distributed.get_rank()} {x}\")\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005401 / 0.011353 (-0.005952) | 0.004023 / 0.011008 (-0.006985) | 0.064601 / 0.038508 (0.026093) | 0.028567 / 0.023109 (0.005457) | 0.245476 / 0.275898 (-0.030422) | 0.292727 / 0.323480 (-0.030752) | 0.003080 / 0.007986 (-0.004905) | 0.002779 / 0.004328 (-0.001549) | 0.050046 / 0.004250 (0.045796) | 0.043906 / 0.037052 (0.006854) | 0.273896 / 0.258489 (0.015407) | 0.308430 / 0.293841 (0.014589) | 0.028442 / 0.128546 (-0.100104) | 0.010694 / 0.075646 (-0.064953) | 0.209048 / 0.419271 (-0.210223) | 0.036062 / 0.043533 (-0.007471) | 0.242689 / 0.255139 (-0.012450) | 0.261695 / 0.283200 (-0.021504) | 0.018519 / 0.141683 (-0.123163) | 1.122735 / 1.452155 (-0.329420) | 1.172680 / 1.492716 (-0.320036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093827 / 0.018006 (0.075820) | 0.302650 / 0.000490 (0.302161) | 0.000218 / 0.000200 (0.000018) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018778 / 0.037411 (-0.018633) | 0.067516 / 0.014526 (0.052990) | 0.079693 / 0.176557 (-0.096864) | 0.125907 / 0.737135 (-0.611228) | 0.081771 / 0.296338 (-0.214568) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281809 / 0.215209 (0.066600) | 2.773937 / 2.077655 (0.696283) | 1.443622 / 1.504120 (-0.060497) | 1.334359 / 1.541195 (-0.206836) | 1.364813 / 
1.468490 (-0.103677) | 0.561670 / 4.584777 (-4.023107) | 2.338292 / 3.745712 (-1.407420) | 2.807595 / 5.269862 (-2.462267) | 1.734162 / 4.565676 (-2.831514) | 0.063681 / 0.424275 (-0.360594) | 0.004934 / 0.007607 (-0.002673) | 0.336781 / 0.226044 (0.110737) | 3.311744 / 2.268929 (1.042815) | 1.826802 / 55.444624 (-53.617822) | 1.579604 / 6.876477 (-5.296872) | 1.620526 / 2.142072 (-0.521546) | 0.647061 / 4.805227 (-4.158166) | 0.117729 / 6.500664 (-6.382935) | 0.042216 / 0.075469 (-0.033253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.994289 / 1.841788 (-0.847499) | 12.266185 / 8.074308 (4.191877) | 9.634035 / 10.191392 (-0.557357) | 0.144521 / 0.680424 (-0.535902) | 0.013787 / 0.534201 (-0.520414) | 0.288353 / 0.579283 (-0.290930) | 0.262183 / 0.434364 (-0.172181) | 0.336960 / 0.540337 (-0.203378) | 0.441142 / 1.386936 (-0.945794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005678 / 0.011353 (-0.005675) | 0.004011 / 0.011008 (-0.006998) | 0.049319 / 0.038508 (0.010811) | 0.032543 / 0.023109 (0.009434) | 0.276389 / 0.275898 (0.000491) | 0.298495 / 0.323480 (-0.024985) | 0.004192 / 0.007986 (-0.003794) | 0.002765 / 0.004328 (-0.001563) | 0.048739 / 0.004250 (0.044489) | 0.046212 / 0.037052 (0.009160) | 0.286614 / 0.258489 (0.028125) | 0.315949 / 0.293841 (0.022108) | 0.029833 / 0.128546 (-0.098714) | 0.010762 / 0.075646 (-0.064884) | 0.058489 / 0.419271 (-0.360783) | 0.052258 / 0.043533 (0.008725) | 0.275873 / 0.255139 (0.020734) | 0.288668 / 0.283200 (0.005468) | 0.018828 / 0.141683 (-0.122855) | 1.140196 / 1.452155 (-0.311959) | 1.229500 / 1.492716 (-0.263217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094161 / 0.018006 (0.076155) | 0.303519 / 0.000490 (0.303030) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022088 / 0.037411 (-0.015324) | 0.076376 / 0.014526 (0.061850) | 0.088705 / 0.176557 (-0.087851) | 0.127602 / 0.737135 (-0.609533) | 0.088689 / 0.296338 (-0.207649) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292363 / 0.215209 (0.077154) | 2.859215 / 2.077655 (0.781561) | 1.566389 / 1.504120 (0.062270) | 1.439195 / 1.541195 (-0.102000) | 1.463805 / 1.468490 (-0.004685) | 0.551660 / 4.584777 (-4.033116) | 2.427462 / 3.745712 (-1.318250) | 2.712372 / 5.269862 (-2.557490) | 1.811331 / 4.565676 (-2.754346) | 0.061539 / 0.424275 (-0.362736) | 0.005062 / 0.007607 (-0.002545) | 0.341984 / 0.226044 (0.115940) | 3.352171 / 2.268929 (1.083242) | 1.917550 / 55.444624 (-53.527074) | 1.642668 / 6.876477 (-5.233809) | 1.817204 / 2.142072 (-0.324868) | 0.630849 / 4.805227 (-4.174379) | 0.115788 / 6.500664 (-6.384876) | 0.041041 / 0.075469 (-0.034428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017725 / 1.841788 (-0.824062) | 12.976994 / 8.074308 (4.902686) | 10.307414 / 10.191392 (0.116022) | 0.141090 / 0.680424 (-0.539334) | 0.015548 / 0.534201 (-0.518653) | 0.288184 / 0.579283 (-0.291099) | 0.276409 / 0.434364 (-0.157955) | 0.328289 / 0.540337 (-0.212048) | 0.429138 / 1.386936 (-0.957798) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-11T08:49:43Z
| 2024-03-01T19:09:14Z
| 2024-03-01T19:02:33Z
|
CONTRIBUTOR
| null | null | null |
Corrects an issue where `self._ex_iterable` was erroneously used instead of `ex_iterable` when Distributed Data Parallel (DDP) and multiple DataLoader workers are used concurrently. This improper usage produced incorrect `shards_indices`, which in turn broke the control flow responsible for worker creation. The fix ensures the appropriate iterable is used, so whether a new worker should be instantiated is determined correctly.
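A minimal sketch of the corrected logic (names are assumed for illustration, not the actual `datasets` source): the shard split must be computed from the `ex_iterable` that is actually being iterated, which can differ from `self._ex_iterable` once the dataset has been split across DDP ranks.

```python
# Hypothetical sketch: assign this iterable's shards to DataLoader workers.
# Using ex_iterable (the iterable in use) rather than self._ex_iterable keeps
# the shard count consistent with what each worker will actually iterate.
def shards_for_worker(ex_iterable, worker_id: int, num_workers: int) -> list[int]:
    return [
        shard_idx
        for shard_idx in range(ex_iterable.n_shards)
        if shard_idx % num_workers == worker_id
    ]
```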
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6582/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6582/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6582.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6582",
"merged_at": "2024-03-01T19:02:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6582.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6582"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6710
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6710/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6710/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6710/events
|
https://github.com/huggingface/datasets/pull/6710
| 2,164,781,564
|
PR_kwDODunzps5oe4ov
| 6,710
|
Persist IterableDataset epoch in workers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6710). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005283 / 0.011353 (-0.006070) | 0.003866 / 0.011008 (-0.007142) | 0.063124 / 0.038508 (0.024616) | 0.030240 / 0.023109 (0.007131) | 0.232855 / 0.275898 (-0.043043) | 0.257538 / 0.323480 (-0.065942) | 0.004165 / 0.007986 (-0.003820) | 0.002826 / 0.004328 (-0.001502) | 0.049735 / 0.004250 (0.045485) | 0.045297 / 0.037052 (0.008244) | 0.251831 / 0.258489 (-0.006658) | 0.277812 / 0.293841 (-0.016029) | 0.030004 / 0.128546 (-0.098542) | 0.012319 / 0.075646 (-0.063328) | 0.206881 / 0.419271 (-0.212391) | 0.036561 / 0.043533 (-0.006972) | 0.234364 / 0.255139 (-0.020775) | 0.258316 / 0.283200 (-0.024884) | 0.017815 / 0.141683 (-0.123867) | 1.114111 / 1.452155 (-0.338043) | 1.165428 / 1.492716 (-0.327288) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099302 / 0.018006 (0.081296) | 0.309195 / 0.000490 (0.308705) | 0.000261 / 0.000200 (0.000061) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018765 / 0.037411 (-0.018646) | 0.063123 / 0.014526 (0.048597) | 0.075437 / 0.176557 (-0.101119) | 0.122570 / 0.737135 (-0.614566) | 0.076637 / 0.296338 (-0.219702) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289965 / 0.215209 (0.074756) | 2.839053 / 2.077655 (0.761398) | 1.503463 / 1.504120 (-0.000657) | 1.390833 / 1.541195 (-0.150361) | 1.401918 / 
1.468490 (-0.066572) | 0.711000 / 4.584777 (-3.873777) | 2.325513 / 3.745712 (-1.420199) | 2.831630 / 5.269862 (-2.438231) | 1.908370 / 4.565676 (-2.657307) | 0.077867 / 0.424275 (-0.346408) | 0.005509 / 0.007607 (-0.002098) | 0.336494 / 0.226044 (0.110450) | 3.358587 / 2.268929 (1.089658) | 1.901067 / 55.444624 (-53.543558) | 1.590130 / 6.876477 (-5.286347) | 1.753850 / 2.142072 (-0.388223) | 0.792458 / 4.805227 (-4.012769) | 0.135584 / 6.500664 (-6.365080) | 0.042028 / 0.075469 (-0.033441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966162 / 1.841788 (-0.875625) | 11.705310 / 8.074308 (3.631002) | 9.158842 / 10.191392 (-1.032550) | 0.128793 / 0.680424 (-0.551631) | 0.014422 / 0.534201 (-0.519779) | 0.299009 / 0.579283 (-0.280274) | 0.262873 / 0.434364 (-0.171491) | 0.340836 / 0.540337 (-0.199501) | 0.464440 / 1.386936 (-0.922496) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005951 / 0.011353 (-0.005402) | 0.003984 / 0.011008 (-0.007024) | 0.051432 / 0.038508 (0.012924) | 0.033223 / 0.023109 (0.010113) | 0.263972 / 0.275898 (-0.011926) | 0.289060 / 0.323480 (-0.034420) | 0.004446 / 0.007986 (-0.003540) | 0.002891 / 0.004328 (-0.001438) | 0.049347 / 0.004250 (0.045096) | 0.041191 / 0.037052 (0.004138) | 0.278334 / 0.258489 (0.019844) | 0.314065 / 0.293841 (0.020224) | 0.032020 / 0.128546 (-0.096526) | 0.012472 / 0.075646 (-0.063174) | 0.061288 / 0.419271 (-0.357984) | 0.033489 / 0.043533 (-0.010044) | 0.266831 / 0.255139 (0.011692) | 0.283008 / 0.283200 (-0.000192) | 0.018491 / 0.141683 (-0.123192) | 1.133634 / 1.452155 (-0.318521) | 1.154627 / 1.492716 (-0.338089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101831 / 0.018006 (0.083825) | 0.317942 / 0.000490 (0.317452) | 0.000217 / 0.000200 (0.000018) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022608 / 0.037411 (-0.014803) | 0.076776 / 0.014526 (0.062250) | 0.088686 / 0.176557 (-0.087870) | 0.129092 / 0.737135 (-0.608044) | 0.090780 / 0.296338 (-0.205558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286762 / 0.215209 (0.071553) | 2.824307 / 2.077655 (0.746652) | 1.547215 / 1.504120 (0.043095) | 1.424522 / 1.541195 (-0.116673) | 1.446414 / 1.468490 (-0.022076) | 0.723683 / 4.584777 (-3.861094) | 0.974129 / 3.745712 (-2.771583) | 2.952552 / 5.269862 (-2.317309) | 1.903663 / 4.565676 (-2.662013) | 0.078786 / 0.424275 (-0.345489) | 0.005130 / 0.007607 (-0.002477) | 0.338925 / 0.226044 (0.112881) | 3.378557 / 2.268929 (1.109629) | 1.892951 / 55.444624 (-53.551674) | 1.599844 / 6.876477 (-5.276633) | 1.611963 / 2.142072 (-0.530109) | 0.793614 / 4.805227 (-4.011613) | 0.133795 / 6.500664 (-6.366869) | 0.040777 / 0.075469 (-0.034692) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001391 / 1.841788 (-0.840397) | 12.166811 / 8.074308 (4.092503) | 10.588180 / 10.191392 (0.396788) | 0.141609 / 0.680424 (-0.538815) | 0.020941 / 0.534201 (-0.513260) | 0.340149 / 0.579283 (-0.239134) | 0.122988 / 0.434364 (-0.311376) | 0.339747 / 0.540337 (-0.200591) | 0.434338 / 1.386936 (-0.952598) |\n\n</details>\n</details>\n\n\n"
] | 2024-03-02T12:08:50Z
| 2024-07-01T17:51:25Z
| 2024-07-01T17:45:30Z
|
MEMBER
| null | null | null |
Use shared memory for the IterableDataset epoch.
This way calling `ds.set_epoch()` in the main process will update the epoch in the DataLoader workers as well.
This is useful especially because the epoch is used to compute the `effective_seed` used for shuffling.
I used torch's shared memory in case users want to send dataset copies without shared memory using pickle. I also find it easier to use than `multiprocessing.shared_memory`, which requires unlinking only in the main process, or `mp.Value`, which is not picklable.
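A minimal sketch of the idea (hypothetical names, not the actual implementation): keep the epoch in a one-element tensor backed by torch shared memory, so writes from the main process are visible to the DataLoader workers.

```python
import torch

class SharedEpoch:
    """Epoch counter backed by torch shared memory (illustrative sketch)."""

    def __init__(self) -> None:
        # share_memory_() moves the tensor to shared memory, so workers that
        # inherit or unpickle this object see updates made by the main process
        self._epoch = torch.zeros(1, dtype=torch.int64).share_memory_()

    def set_epoch(self, epoch: int) -> None:
        self._epoch[0] = epoch

    @property
    def value(self) -> int:
        return int(self._epoch[0])
```

With something like this, `ds.set_epoch(n)` in the main process updates the same shared value the workers read when computing their shuffling seed.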
close https://github.com/huggingface/datasets/issues/6673
cc @rwightman
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6710/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6710/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6710",
"merged_at": "2024-07-01T17:45:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6710"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4712
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4712/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4712/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4712/events
|
https://github.com/huggingface/datasets/pull/4712
| 1,309,177,302
|
PR_kwDODunzps47ohdr
| 4,712
|
Highlight non-commercial license in amazon_reviews_multi dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/108879611?v=4",
"events_url": "https://api.github.com/users/sbroadhurst-hf/events{/privacy}",
"followers_url": "https://api.github.com/users/sbroadhurst-hf/followers",
"following_url": "https://api.github.com/users/sbroadhurst-hf/following{/other_user}",
"gists_url": "https://api.github.com/users/sbroadhurst-hf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sbroadhurst-hf",
"id": 108879611,
"login": "sbroadhurst-hf",
"node_id": "U_kgDOBn1e-w",
"organizations_url": "https://api.github.com/users/sbroadhurst-hf/orgs",
"received_events_url": "https://api.github.com/users/sbroadhurst-hf/received_events",
"repos_url": "https://api.github.com/users/sbroadhurst-hf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sbroadhurst-hf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbroadhurst-hf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sbroadhurst-hf",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-19T08:36:20Z
| 2022-07-27T16:09:40Z
| 2022-07-27T15:57:41Z
|
CONTRIBUTOR
| null | null | null |
Highlight that the licence granted by Amazon only covers non-commercial research use.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4712/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4712/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4712.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4712",
"merged_at": "2022-07-27T15:57:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4712.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4712"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6954
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6954/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6954/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6954/events
|
https://github.com/huggingface/datasets/pull/6954
| 2,333,530,558
|
PR_kwDODunzps5xbWtU
| 6,954
|
Remove default `trust_remote_code=True`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6954). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"yay! 🎉 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004881 / 0.011353 (-0.006472) | 0.003246 / 0.011008 (-0.007762) | 0.062496 / 0.038508 (0.023988) | 0.030760 / 0.023109 (0.007651) | 0.241500 / 0.275898 (-0.034398) | 0.272073 / 0.323480 (-0.051407) | 0.004123 / 0.007986 (-0.003863) | 0.002796 / 0.004328 (-0.001533) | 0.049015 / 0.004250 (0.044764) | 0.047095 / 0.037052 (0.010043) | 0.257002 / 0.258489 (-0.001487) | 0.287602 / 0.293841 (-0.006239) | 0.027281 / 0.128546 (-0.101265) | 0.010132 / 0.075646 (-0.065514) | 0.203699 / 0.419271 (-0.215572) | 0.036553 / 0.043533 (-0.006980) | 0.246221 / 0.255139 (-0.008918) | 0.268137 / 0.283200 (-0.015062) | 0.017260 / 0.141683 (-0.124423) | 1.100677 / 1.452155 (-0.351478) | 1.148367 / 1.492716 (-0.344349) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102519 / 0.018006 (0.084513) | 0.301929 / 0.000490 (0.301439) | 0.000223 / 0.000200 (0.000023) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018590 / 0.037411 (-0.018821) | 0.061615 / 0.014526 (0.047089) | 0.074579 / 0.176557 (-0.101978) | 0.121415 / 0.737135 (-0.615720) | 0.075696 / 0.296338 (-0.220642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283842 / 0.215209 (0.068633) | 2.788321 / 2.077655 (0.710666) | 1.481376 / 1.504120 (-0.022743) | 1.356064 / 1.541195 (-0.185131) | 1.380592 / 
1.468490 (-0.087898) | 0.575577 / 4.584777 (-4.009199) | 2.471858 / 3.745712 (-1.273854) | 2.760769 / 5.269862 (-2.509093) | 1.808638 / 4.565676 (-2.757038) | 0.064930 / 0.424275 (-0.359345) | 0.005056 / 0.007607 (-0.002551) | 0.337794 / 0.226044 (0.111750) | 3.359444 / 2.268929 (1.090515) | 1.829540 / 55.444624 (-53.615084) | 1.518660 / 6.876477 (-5.357817) | 1.671612 / 2.142072 (-0.470460) | 0.664286 / 4.805227 (-4.140941) | 0.119593 / 6.500664 (-6.381071) | 0.042519 / 0.075469 (-0.032950) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993152 / 1.841788 (-0.848636) | 11.733054 / 8.074308 (3.658746) | 9.746734 / 10.191392 (-0.444658) | 0.143026 / 0.680424 (-0.537398) | 0.014900 / 0.534201 (-0.519301) | 0.292243 / 0.579283 (-0.287040) | 0.261301 / 0.434364 (-0.173063) | 0.330838 / 0.540337 (-0.209500) | 0.523719 / 1.386936 (-0.863217) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005707 / 0.011353 (-0.005646) | 0.003523 / 0.011008 (-0.007485) | 0.052265 / 0.038508 (0.013757) | 0.034296 / 0.023109 (0.011187) | 0.266589 / 0.275898 (-0.009309) | 0.288441 / 0.323480 (-0.035039) | 0.004507 / 0.007986 (-0.003478) | 0.002745 / 0.004328 (-0.001583) | 0.049417 / 0.004250 (0.045167) | 0.042679 / 0.037052 (0.005627) | 0.278518 / 0.258489 (0.020029) | 0.328751 / 0.293841 (0.034911) | 0.029530 / 0.128546 (-0.099016) | 0.010373 / 0.075646 (-0.065274) | 0.058207 / 0.419271 (-0.361064) | 0.033434 / 0.043533 (-0.010099) | 0.267902 / 0.255139 (0.012763) | 0.288192 / 0.283200 (0.004993) | 0.018866 / 0.141683 (-0.122817) | 1.132734 / 1.452155 (-0.319421) | 1.172879 / 1.492716 (-0.319837) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097787 / 0.018006 (0.079780) | 0.305509 / 0.000490 (0.305019) | 0.000268 / 0.000200 (0.000068) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023230 / 0.037411 (-0.014181) | 0.076637 / 0.014526 (0.062111) | 0.088386 / 0.176557 (-0.088171) | 0.131079 / 0.737135 (-0.606057) | 0.091142 / 0.296338 (-0.205197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295586 / 0.215209 (0.080377) | 2.872090 / 2.077655 (0.794435) | 1.538152 / 1.504120 (0.034032) | 1.405695 / 1.541195 (-0.135500) | 1.421058 / 1.468490 (-0.047432) | 0.561179 / 4.584777 (-4.023598) | 0.943954 / 3.745712 (-2.801758) | 2.684381 / 5.269862 (-2.585481) | 1.757457 / 4.565676 (-2.808220) | 0.062903 / 0.424275 (-0.361372) | 0.004998 / 0.007607 (-0.002610) | 0.370290 / 0.226044 (0.144245) | 3.374988 / 2.268929 (1.106059) | 1.899282 / 55.444624 (-53.545342) | 1.598787 / 6.876477 (-5.277690) | 1.735371 / 2.142072 (-0.406702) | 0.647367 / 4.805227 (-4.157860) | 0.116975 / 6.500664 (-6.383689) | 0.040811 / 0.075469 (-0.034658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996380 / 1.841788 (-0.845408) | 12.225657 / 8.074308 (4.151349) | 10.291221 / 10.191392 (0.099829) | 0.142791 / 0.680424 (-0.537633) | 0.016087 / 0.534201 (-0.518114) | 0.299978 / 0.579283 (-0.279305) | 0.149444 / 0.434364 (-0.284920) | 0.321354 / 0.540337 (-0.218984) | 0.414492 / 1.386936 (-0.972444) |\n\n</details>\n</details>\n\n\n",
"@lhoestq Thanks for the PR, Is there a way to detect if `trust_remote_code=True` will be required for loading the dataset, without loading it? It would be great if you could please point me to the relevant documentation.",
"You can check the presence of a python loading script in the repository.\r\n\r\nIf there is a .py file named after the repository name, then it requires trust_remote_code.",
"Thanks @lhoestq for the reference."
] | 2024-06-04T13:22:56Z
| 2024-06-17T16:32:24Z
| 2024-06-07T12:20:29Z
|
MEMBER
| null | null | null |
TODO:
- [x] fix tests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6954/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6954/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6954",
"merged_at": "2024-06-07T12:20:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6954"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7513
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7513/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7513/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7513/events
|
https://github.com/huggingface/datasets/issues/7513
| 2,994,678,437
|
I_kwDODunzps6yfyql
| 7,513
|
MemoryError while creating dataset from generator
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"events_url": "https://api.github.com/users/simonreise/events{/privacy}",
"followers_url": "https://api.github.com/users/simonreise/followers",
"following_url": "https://api.github.com/users/simonreise/following{/other_user}",
"gists_url": "https://api.github.com/users/simonreise/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simonreise",
"id": 43753582,
"login": "simonreise",
"node_id": "MDQ6VXNlcjQzNzUzNTgy",
"organizations_url": "https://api.github.com/users/simonreise/orgs",
"received_events_url": "https://api.github.com/users/simonreise/received_events",
"repos_url": "https://api.github.com/users/simonreise/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simonreise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonreise/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simonreise",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Upd: created a PR that can probably solve the problem: #7514",
"Hi ! We need to take the generator into account for the cache. The generator is hashed to make the dataset fingerprint used by the cache. This way you can reload the Dataset from the cache without regenerating in subsequent `from_generator` calls.\n\nMaybe instead of removing generator from the hasher input, we can let users pass their own Dataset fingerprint to `from_generator`, and if it's specified we don't need to hash anything",
"Upd: I successfully generated a dataset from my large geospatial data with `generator` excluded from hashing and saved it to disk without running into memory errors. So, it looks like there are no other bottlenecks in dataset generation in my case\n\nMaybe letting users pass their own fingerprint to skip hashing can be a great solution to that issue!",
"@lhoestq I tried to implement user-defined dataset fingerprint in #7533 . Am I doing it right?"
] | 2025-04-15T01:02:02Z
| 2025-04-23T19:37:08Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
# TL;DR
The `Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including the `generator` function itself. `BuilderConfig.create_config_id` then tries to hash all of these args, which can take a long time, or even raise a MemoryError if the data processed by the generator function is large enough.
Maybe we should pop `generator` from `config_kwargs_to_add_to_suffix` before hashing to avoid this.
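A minimal sketch of the proposed workaround (assuming the structure of `BuilderConfig.create_config_id` visible in the traceback further below, where `Hasher.hash(config_kwargs_to_add_to_suffix)` is called):

```python
# Drop the generator callable before hashing the remaining kwargs, so the
# config id no longer depends on pickling a potentially huge closure.
config_kwargs_to_add_to_suffix.pop("generator", None)
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
```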
# Full description
I have a pretty large spatial imagery dataset that is generated from two xbatcher.BatchGenerators via a custom `dataset_generator` function that, simplified, looks like this:
```python
def dataset_generator():
for index in samples:
data_dict = {
"key": index,
"x": x_batches[index].data,
"y": y_batches[index].data,
}
yield data_dict
```
Then I use `datasets.Dataset.from_generator` to generate the dataset itself.
```python
# Create dataset
ds = datasets.Dataset.from_generator(
dataset_generator,
features=feat,
cache_dir=(output / ".cache"),
)
```
It works nicely with pretty small data, but if the dataset is huge and barely fits in memory, it crashes with memory error:
<details>
<summary>Full stack trace</summary>
```
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\remote_sensing_processor\segmentation\semantic\tiles.py:248](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/remote_sensing_processor/segmentation/semantic/tiles.py#line=247), in generate_tiles(x, y, output, tile_size, shuffle, split, x_dtype, y_dtype, x_nodata, y_nodata)
245 yield data_dict
247 # Create dataset
--> 248 ds = datasets.Dataset.from_generator(
249 dataset_generator,
250 features=feat,
251 cache_dir=(output / ".cache"),
252 )
254 # Save dataset
255 ds.save_to_disk(output / name)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\arrow_dataset.py:1105](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/arrow_dataset.py#line=1104), in Dataset.from_generator(generator, features, cache_dir, keep_in_memory, gen_kwargs, num_proc, split, **kwargs)
1052 """Create a Dataset from a generator.
1053
1054 Args:
(...) 1101 ```
1102 """
1103 from .io.generator import GeneratorDatasetInputStream
-> 1105 return GeneratorDatasetInputStream(
1106 generator=generator,
1107 features=features,
1108 cache_dir=cache_dir,
1109 keep_in_memory=keep_in_memory,
1110 gen_kwargs=gen_kwargs,
1111 num_proc=num_proc,
1112 split=split,
1113 **kwargs,
1114 ).read()
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\io\generator.py:29](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/io/generator.py#line=28), in GeneratorDatasetInputStream.__init__(self, generator, features, cache_dir, keep_in_memory, streaming, gen_kwargs, num_proc, split, **kwargs)
9 def __init__(
10 self,
11 generator: Callable,
(...) 19 **kwargs,
20 ):
21 super().__init__(
22 features=features,
23 cache_dir=cache_dir,
(...) 27 **kwargs,
28 )
---> 29 self.builder = Generator(
30 cache_dir=cache_dir,
31 features=features,
32 generator=generator,
33 gen_kwargs=gen_kwargs,
34 split=split,
35 **kwargs,
36 )
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\builder.py:343](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/builder.py#line=342), in DatasetBuilder.__init__(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, repo_id, data_files, data_dir, storage_options, writer_batch_size, **config_kwargs)
341 config_kwargs["data_dir"] = data_dir
342 self.config_kwargs = config_kwargs
--> 343 self.config, self.config_id = self._create_builder_config(
344 config_name=config_name,
345 custom_features=features,
346 **config_kwargs,
347 )
349 # prepare info: DatasetInfo are a standardized dataclass across all datasets
350 # Prefill datasetinfo
351 if info is None:
352 # TODO FOR PACKAGED MODULES IT IMPORTS DATA FROM src/packaged_modules which doesn't make sense
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\builder.py:604](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/builder.py#line=603), in DatasetBuilder._create_builder_config(self, config_name, custom_features, **config_kwargs)
598 builder_config._resolve_data_files(
599 base_path=self.base_path,
600 download_config=DownloadConfig(token=self.token, storage_options=self.storage_options),
601 )
603 # compute the config id that is going to be used for caching
--> 604 config_id = builder_config.create_config_id(
605 config_kwargs,
606 custom_features=custom_features,
607 )
608 is_custom = (config_id not in self.builder_configs) and config_id != "default"
609 if is_custom:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\builder.py:187](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/builder.py#line=186), in BuilderConfig.create_config_id(self, config_kwargs, custom_features)
185 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
186 else:
--> 187 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
189 if custom_features is not None:
190 m = Hasher()
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\fingerprint.py:188](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/fingerprint.py#line=187), in Hasher.hash(cls, value)
186 @classmethod
187 def hash(cls, value: Any) -> str:
--> 188 return cls.hash_bytes(dumps(value))
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:109](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=108), in dumps(obj)
107 """Pickle an object to a string."""
108 file = BytesIO()
--> 109 dump(obj, file)
110 return file.getvalue()
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:103](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=102), in dump(obj, file)
101 def dump(obj, file):
102 """Pickle an object to a file."""
--> 103 Pickler(file, recurse=True).dump(obj)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:420](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=419), in Pickler.dump(self, obj)
418 def dump(self, obj): #NOTE: if settings change, need to update attributes
419 logger.trace_setup(self)
--> 420 StockPickler.dump(self, obj)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:484](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=483), in _Pickler.dump(self, obj)
482 if self.proto >= 4:
483 self.framer.start_framing()
--> 484 self.save(obj)
485 self.write(STOP)
486 self.framer.end_framing()
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1014](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1013), in _Pickler._batch_setitems(self, items)
1012 for k, v in tmp:
1013 save(k)
-> 1014 save(v)
1015 write(SETITEMS)
1016 elif n:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1985](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1984), in save_function(pickler, obj)
1982 if state_dict:
1983 state = state, state_dict
-> 1985 _save_with_postproc(pickler, (_create_function, (
1986 obj.__code__, globs, obj.__name__, obj.__defaults__,
1987 closure
1988 ), state), obj=obj, postproc_list=postproc_list)
1990 # Lift closure cell update to earliest function (#458)
1991 if _postproc:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1117](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1116), in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list)
1115 continue
1116 else:
-> 1117 pickler.save_reduce(*reduction)
1118 # pop None created by calling preprocessing step off stack
1119 pickler.write(POP)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:690](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=689), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
688 else:
689 save(func)
--> 690 save(args)
691 write(REDUCE)
693 if obj is not None:
694 # If the object is already in the memo, this means it is
695 # recursive. In this case, throw away everything we put on the
696 # stack, and fetch the object back from the memo.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:905](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=904), in _Pickler.save_tuple(self, obj)
903 if n <= 3 and self.proto >= 2:
904 for element in obj:
--> 905 save(element)
906 # Subtle. Same as in the big comment below.
907 if id(obj) in memo:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:601](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=600), in _Pickler.save(self, obj, save_persistent_id)
597 raise PicklingError("Tuple returned by %s must have "
598 "two to six elements" % reduce)
600 # Save the reduce() output and finally memoize the object
--> 601 self.save_reduce(obj=obj, *rv)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:715](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=714), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
713 if state is not None:
714 if state_setter is None:
--> 715 save(state)
716 write(BUILD)
717 else:
718 # If a state_setter is specified, call it instead of load_build
719 # to update obj's with its previous state.
720 # First, push state_setter and its tuple of expected arguments
721 # (obj, state) onto the stack.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1014](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1013), in _Pickler._batch_setitems(self, items)
1012 for k, v in tmp:
1013 save(k)
-> 1014 save(v)
1015 write(SETITEMS)
1016 elif n:
[... skipping similar frames: Pickler.save at line 70 (1 times), Pickler.save at line 414 (1 times)]
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:601](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=600), in _Pickler.save(self, obj, save_persistent_id)
597 raise PicklingError("Tuple returned by %s must have "
598 "two to six elements" % reduce)
600 # Save the reduce() output and finally memoize the object
--> 601 self.save_reduce(obj=obj, *rv)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:715](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=714), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
713 if state is not None:
714 if state_setter is None:
--> 715 save(state)
716 write(BUILD)
717 else:
718 # If a state_setter is specified, call it instead of load_build
719 # to update obj's with its previous state.
720 # First, push state_setter and its tuple of expected arguments
721 # (obj, state) onto the stack.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:905](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=904), in _Pickler.save_tuple(self, obj)
903 if n <= 3 and self.proto >= 2:
904 for element in obj:
--> 905 save(element)
906 # Subtle. Same as in the big comment below.
907 if id(obj) in memo:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1014](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1013), in _Pickler._batch_setitems(self, items)
1012 for k, v in tmp:
1013 save(k)
-> 1014 save(v)
1015 write(SETITEMS)
1016 elif n:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:601](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=600), in _Pickler.save(self, obj, save_persistent_id)
597 raise PicklingError("Tuple returned by %s must have "
598 "two to six elements" % reduce)
600 # Save the reduce() output and finally memoize the object
--> 601 self.save_reduce(obj=obj, *rv)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:715](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=714), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
713 if state is not None:
714 if state_setter is None:
--> 715 save(state)
716 write(BUILD)
717 else:
718 # If a state_setter is specified, call it instead of load_build
719 # to update obj's with its previous state.
720 # First, push state_setter and its tuple of expected arguments
721 # (obj, state) onto the stack.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:905](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=904), in _Pickler.save_tuple(self, obj)
903 if n <= 3 and self.proto >= 2:
904 for element in obj:
--> 905 save(element)
906 # Subtle. Same as in the big comment below.
907 if id(obj) in memo:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1014](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1013), in _Pickler._batch_setitems(self, items)
1012 for k, v in tmp:
1013 save(k)
-> 1014 save(v)
1015 write(SETITEMS)
1016 elif n:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:601](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=600), in _Pickler.save(self, obj, save_persistent_id)
597 raise PicklingError("Tuple returned by %s must have "
598 "two to six elements" % reduce)
600 # Save the reduce() output and finally memoize the object
--> 601 self.save_reduce(obj=obj, *rv)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:690](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=689), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
688 else:
689 save(func)
--> 690 save(args)
691 write(REDUCE)
693 if obj is not None:
694 # If the object is already in the memo, this means it is
695 # recursive. In this case, throw away everything we put on the
696 # stack, and fetch the object back from the memo.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:920](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=919), in _Pickler.save_tuple(self, obj)
918 write(MARK)
919 for element in obj:
--> 920 save(element)
922 if id(obj) in memo:
923 # Subtle. d was not in memo when we entered save_tuple(), so
924 # the process of saving the tuple's elements must have saved
(...) 928 # could have been done in the "for element" loop instead, but
929 # recursive tuples are a rare thing.
930 get = self.get(memo[id(obj)][0])
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:601](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=600), in _Pickler.save(self, obj, save_persistent_id)
597 raise PicklingError("Tuple returned by %s must have "
598 "two to six elements" % reduce)
600 # Save the reduce() output and finally memoize the object
--> 601 self.save_reduce(obj=obj, *rv)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:715](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=714), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
713 if state is not None:
714 if state_setter is None:
--> 715 save(state)
716 write(BUILD)
717 else:
718 # If a state_setter is specified, call it instead of load_build
719 # to update obj's with its previous state.
720 # First, push state_setter and its tuple of expected arguments
721 # (obj, state) onto the stack.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1014](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1013), in _Pickler._batch_setitems(self, items)
1012 for k, v in tmp:
1013 save(k)
-> 1014 save(v)
1015 write(SETITEMS)
1016 elif n:
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1019](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1018), in _Pickler._batch_setitems(self, items)
1017 k, v = tmp[0]
1018 save(k)
-> 1019 save(v)
1020 write(SETITEM)
1021 # else tmp is empty, and we're done
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:601](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=600), in _Pickler.save(self, obj, save_persistent_id)
597 raise PicklingError("Tuple returned by %s must have "
598 "two to six elements" % reduce)
600 # Save the reduce() output and finally memoize the object
--> 601 self.save_reduce(obj=obj, *rv)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:715](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=714), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
713 if state is not None:
714 if state_setter is None:
--> 715 save(state)
716 write(BUILD)
717 else:
718 # If a state_setter is specified, call it instead of load_build
719 # to update obj's with its previous state.
720 # First, push state_setter and its tuple of expected arguments
721 # (obj, state) onto the stack.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1014](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1013), in _Pickler._batch_setitems(self, items)
1012 for k, v in tmp:
1013 save(k)
-> 1014 save(v)
1015 write(SETITEMS)
1016 elif n:
[... skipping similar frames: Pickler.save at line 70 (1 times), Pickler.save at line 414 (1 times)]
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:1217](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=1216), in save_module_dict(pickler, obj)
1214 if is_dill(pickler, child=False) and pickler._session:
1215 # we only care about session the first pass thru
1216 pickler._first_pass = False
-> 1217 StockPickler.save_dict(pickler, obj)
1218 logger.trace(pickler, "# D2")
1219 return
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:990](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=989), in _Pickler.save_dict(self, obj)
987 self.write(MARK + DICT)
989 self.memoize(obj)
--> 990 self._batch_setitems(obj.items())
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:83](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=82), in Pickler._batch_setitems(self, items)
80 from datasets.fingerprint import Hasher
82 items = sorted(items, key=lambda x: Hasher.hash(x[0]))
---> 83 dill.Pickler._batch_setitems(self, items)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:1014](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=1013), in _Pickler._batch_setitems(self, items)
1012 for k, v in tmp:
1013 save(k)
-> 1014 save(v)
1015 write(SETITEMS)
1016 elif n:
[... skipping similar frames: Pickler.save at line 70 (1 times), Pickler.save at line 414 (1 times)]
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:601](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=600), in _Pickler.save(self, obj, save_persistent_id)
597 raise PicklingError("Tuple returned by %s must have "
598 "two to six elements" % reduce)
600 # Save the reduce() output and finally memoize the object
--> 601 self.save_reduce(obj=obj, *rv)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:715](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=714), in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
713 if state is not None:
714 if state_setter is None:
--> 715 save(state)
716 write(BUILD)
717 else:
718 # If a state_setter is specified, call it instead of load_build
719 # to update obj's with its previous state.
720 # First, push state_setter and its tuple of expected arguments
721 # (obj, state) onto the stack.
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:920](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=919), in _Pickler.save_tuple(self, obj)
918 write(MARK)
919 for element in obj:
--> 920 save(element)
922 if id(obj) in memo:
923 # Subtle. d was not in memo when we entered save_tuple(), so
924 # the process of saving the tuple's elements must have saved
(...) 928 # could have been done in the "for element" loop instead, but
929 # recursive tuples are a rare thing.
930 get = self.get(memo[id(obj)][0])
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\datasets\utils\_dill.py:70](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/datasets/utils/_dill.py#line=69), in Pickler.save(self, obj, save_persistent_id)
68 if obj_type is FunctionType:
69 obj = getattr(obj, "_torchdynamo_orig_callable", obj)
---> 70 dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\site-packages\dill\_dill.py:414](file:///C:/ProgramData/miniforge3/envs/geo/Lib/site-packages/dill/_dill.py#line=413), in Pickler.save(self, obj, save_persistent_id)
412 msg = "Can't pickle %s: attribute lookup builtins.generator failed" % GeneratorType
413 raise PicklingError(msg)
--> 414 StockPickler.save(self, obj, save_persistent_id)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:558](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=557), in _Pickler.save(self, obj, save_persistent_id)
556 f = self.dispatch.get(t)
557 if f is not None:
--> 558 f(self, obj) # Call unbound method with explicit self
559 return
561 # Check private dispatch table if any, or else
562 # copyreg.dispatch_table
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:809](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=808), in _Pickler.save_bytes(self, obj)
806 self.save_reduce(codecs.encode,
807 (str(obj, 'latin1'), 'latin1'), obj=obj)
808 return
--> 809 self._save_bytes_no_memo(obj)
810 self.memoize(obj)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:797](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=796), in _Pickler._save_bytes_no_memo(self, obj)
795 self._write_large_bytes(BINBYTES8 + pack("<Q", n), obj)
796 elif n >= self.framer._FRAME_SIZE_TARGET:
--> 797 self._write_large_bytes(BINBYTES + pack("<I", n), obj)
798 else:
799 self.write(BINBYTES + pack("<I", n) + obj)
File [C:\ProgramData\miniforge3\envs\geo\Lib\pickle.py:254](file:///C:/ProgramData/miniforge3/envs/geo/Lib/pickle.py#line=253), in _Framer.write_large_bytes(self, header, payload)
247 # Perform direct write of the header and payload of the large binary
248 # object. Be careful not to concatenate the header and the payload
249 # prior to calling 'write' as we do not want to allocate a large
250 # temporary bytes object.
251 # We intentionally do not insert a protocol 4 frame opcode to make
252 # it possible to optimize file.read calls in the loader.
253 write(header)
--> 254 write(payload)
MemoryError:
```
</details>
A MemoryError is an expected type of error in such a case, but when I started digging in, I found that it occurs in a somewhat unexpected place - the `create_config_id` function. It tries to hash `config_kwargs_to_add_to_suffix`, including the generator function itself.
I modified the `BuilderConfig.create_config_id` code like this to check which values are hashed and how long they take to hash, and ran it on a toy dataset:
```
print(config_kwargs_to_add_to_suffix)  # inspect which kwargs end up in the suffix
start_time = time.time()  # `time` has to be imported at the top of the module
if all(isinstance(v, (str, bool, int, float)) for v in config_kwargs_to_add_to_suffix.values()):
    suffix = ",".join(
        str(k) + "=" + urllib.parse.quote_plus(str(v)) for k, v in config_kwargs_to_add_to_suffix.items()
    )
    if len(suffix) > 32:  # hash if too long
        suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
else:
    suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
end_time = time.time()
print(f"Execution time: {end_time - start_time:.4f} seconds")
print(suffix)
```
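For reference, `Hasher.hash` from `datasets.fingerprint` can also be called directly to see which individual values are expensive to hash. This is a small standalone check I would expect to work on the same `datasets` version; `dataset_generator` here refers to the generator from the reproduction below:
```
import time
from datasets.fingerprint import Hasher

def time_hash(value):
    # Hash one value with the same hasher the cache uses and report the time.
    start = time.time()
    digest = Hasher.hash(value)
    print(f"{time.time() - start:.4f}s -> {digest}")

time_hash({"gen_kwargs": None, "split": "train"})  # plain values: near-instant
time_hash(dataset_generator)  # a closure over large arrays: the expensive part
```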
In my case the content of `config_kwargs_to_add_to_suffix` was like this:
```
{'features': {'key': Value(dtype='int64', id=None), 'x': Array3D(shape=(44, 128, 128), dtype='float32', id=None), 'y_class': Array2D(shape=(128, 128), dtype='int32', id=None)}, 'gen_kwargs': None, 'generator': <function generate_tiles.<locals>.dataset_generator at 0x00000139D10D7920>, 'split': NamedSplit('train')}
```
I also noticed that hashing took a significant amount of time - 43.1482 seconds, while the overall function execution (with data loading, batching and saving the dataset) took 2min 45s. The output of `create_config_id` is just a dataset id, so this is an inappropriately large amount of time.
But when I added `config_kwargs_to_add_to_suffix.pop("generator", None)`, the hashing took only 0.0060 seconds.
Maybe we shouldn't hash the generator function, as doing so can be extremely expensive in both computation and memory.
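As a rough illustration (just a sketch of one possible direction, not how `datasets` currently behaves), `create_config_id` could replace the generator with a cheap, stable identifier before hashing, instead of pickling the whole function object and its closure:
```
# Hypothetical sketch inside BuilderConfig.create_config_id; the names are
# taken from the debugging output above, not from the actual implementation.
gen = config_kwargs_to_add_to_suffix.get("generator")
if callable(gen):
    # __qualname__ is cheap to hash, but it ignores the closure, so two
    # different generators with the same name would get the same config id.
    config_kwargs_to_add_to_suffix["generator"] = getattr(gen, "__qualname__", repr(gen))
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
```
The trade-off is that cache invalidation would no longer track changes inside the generator's closure, so simply popping the key (as below) may be the safer first step.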
### Steps to reproduce the bug
This is a simplified example of the workflow I used to generate a dataset, but I think almost any workflow can reproduce this bug.
```
import pystac
import pystac_client
import planetary_computer
import numpy as np
import xarray as xr
import rioxarray as rxr
import dask
import xbatcher
import datasets

# Loading a dataset, in our case - a single Landsat image
catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,
)
brazil = [-60.2, -3.31]
time_of_interest = "2021-06-01/2021-08-31"
search = catalog.search(collections=["landsat-c2-l2"], intersects={"type": "Point", "coordinates": brazil}, datetime=time_of_interest)
items = search.item_collection()
item = min(items, key=lambda item: pystac.extensions.eo.EOExtension.ext(item).cloud_cover)

# Getting x data
bands = []
for band in ["red", "green", "blue", "nir08", "coastal", "swir16", "swir22", "lwir11"]:
    with rxr.open_rasterio(item.assets[band].href, chunks=True, lock=True) as raster:
        raster = raster.to_dataset('band')
        #print(raster)
        raster = raster.rename({1: band})
        bands.append(raster)
x = xr.merge(bands).squeeze().to_array("band").persist()

# Getting y data
with rxr.open_rasterio(item.assets['qa_pixel'].href, chunks=True, lock=True) as raster:
    y = raster.squeeze().persist()

# Setting up batch generators
x_batches = xbatcher.BatchGenerator(ds=x, input_dims={"x": 256, "y": 256})
y_batches = xbatcher.BatchGenerator(ds=y, input_dims={"x": 256, "y": 256})

# Filtering out samples that contain only nodata
samples = list(range(len(x_batches)))
samples_filtered = []
for i in samples:
    if not np.array_equal(np.unique(x_batches[i]), np.array([0.])) and not np.array_equal(np.unique(y_batches[i]), np.array([0])):
        samples_filtered.append(i)
samples = samples_filtered
np.random.shuffle(samples)

# Setting up features
feat = {
    "key": datasets.Value(dtype="int64"),
    "x": datasets.Array3D(dtype="float32", shape=(4, 256, 256)),
    "y": datasets.Array2D(dtype="int32", shape=(256, 256))
}
feat = datasets.Features(feat)

# Setting up a generator
def dataset_generator():
    for index in samples:
        data_dict = {
            "key": index,
            "x": x_batches[index].data,
            "y": y_batches[index].data,
        }
        yield data_dict

# Create the dataset
ds = datasets.Dataset.from_generator(
    dataset_generator,
    features=feat,
    cache_dir="temp/cache",
)
```
Please try adding `config_kwargs_to_add_to_suffix.pop("generator", None)` to `BuilderConfig.create_config_id` and then measure how long the following code block takes to run
```
if all(isinstance(v, (str, bool, int, float)) for v in config_kwargs_to_add_to_suffix.values()):
    suffix = ",".join(
        str(k) + "=" + urllib.parse.quote_plus(str(v)) for k, v in config_kwargs_to_add_to_suffix.items()
    )
    if len(suffix) > 32:  # hash if too long
        suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
else:
    suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
```
with and without `config_kwargs_to_add_to_suffix.pop("generator", None)`.
In my case the difference was 3.3828 seconds without popping the generator function and 0.0010 seconds with popping it.
### Expected behavior
Much faster hashing and no `MemoryError`s
### Environment info
- `datasets` version: 3.5.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.12.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7513/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7513/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6575
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6575/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6575/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6575/events
|
https://github.com/huggingface/datasets/pull/6575
| 2,072,617,406
|
PR_kwDODunzps5jl1V6
| 6,575
|
[IterableDataset] Fix `drop_last_batch` in map after shuffling or sharding
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6575). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005095 / 0.011353 (-0.006257) | 0.003531 / 0.011008 (-0.007478) | 0.063634 / 0.038508 (0.025126) | 0.031187 / 0.023109 (0.008078) | 0.246375 / 0.275898 (-0.029523) | 0.261204 / 0.323480 (-0.062276) | 0.002898 / 0.007986 (-0.005088) | 0.003280 / 0.004328 (-0.001049) | 0.050739 / 0.004250 (0.046488) | 0.042905 / 0.037052 (0.005852) | 0.244506 / 0.258489 (-0.013983) | 0.269403 / 0.293841 (-0.024438) | 0.027588 / 0.128546 (-0.100959) | 0.010860 / 0.075646 (-0.064787) | 0.208332 / 0.419271 (-0.210939) | 0.035762 / 0.043533 (-0.007771) | 0.244448 / 0.255139 (-0.010691) | 0.278464 / 0.283200 (-0.004735) | 0.019839 / 0.141683 (-0.121844) | 1.145340 / 1.452155 (-0.306815) | 1.173240 / 1.492716 (-0.319476) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090472 / 0.018006 (0.072466) | 0.300883 / 0.000490 (0.300394) | 0.000202 / 0.000200 (0.000003) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017884 / 0.037411 (-0.019527) | 0.060629 / 0.014526 (0.046103) | 0.073157 / 0.176557 (-0.103400) | 0.120065 / 0.737135 (-0.617070) | 0.074519 / 0.296338 (-0.221820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289586 / 0.215209 (0.074377) | 2.821042 / 2.077655 (0.743387) | 1.515515 / 1.504120 (0.011395) | 1.390569 / 1.541195 (-0.150625) | 1.433238 / 
1.468490 (-0.035252) | 0.567357 / 4.584777 (-4.017420) | 2.345483 / 3.745712 (-1.400229) | 2.803964 / 5.269862 (-2.465898) | 1.775343 / 4.565676 (-2.790334) | 0.063186 / 0.424275 (-0.361089) | 0.005013 / 0.007607 (-0.002594) | 0.335607 / 0.226044 (0.109562) | 3.307071 / 2.268929 (1.038143) | 1.875228 / 55.444624 (-53.569396) | 1.618286 / 6.876477 (-5.258191) | 1.615963 / 2.142072 (-0.526109) | 0.642633 / 4.805227 (-4.162594) | 0.117222 / 6.500664 (-6.383443) | 0.042590 / 0.075469 (-0.032879) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960724 / 1.841788 (-0.881064) | 11.652978 / 8.074308 (3.578670) | 10.069318 / 10.191392 (-0.122074) | 0.128161 / 0.680424 (-0.552263) | 0.014095 / 0.534201 (-0.520106) | 0.288386 / 0.579283 (-0.290897) | 0.260373 / 0.434364 (-0.173991) | 0.327443 / 0.540337 (-0.212894) | 0.419020 / 1.386936 (-0.967916) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005018 / 0.011353 (-0.006335) | 0.003503 / 0.011008 (-0.007505) | 0.049718 / 0.038508 (0.011210) | 0.029311 / 0.023109 (0.006202) | 0.271097 / 0.275898 (-0.004801) | 0.297370 / 0.323480 (-0.026110) | 0.004230 / 0.007986 (-0.003755) | 0.002741 / 0.004328 (-0.001587) | 0.049686 / 0.004250 (0.045435) | 0.044171 / 0.037052 (0.007119) | 0.274851 / 0.258489 (0.016362) | 0.309554 / 0.293841 (0.015714) | 0.029488 / 0.128546 (-0.099058) | 0.010767 / 0.075646 (-0.064880) | 0.057739 / 0.419271 (-0.361532) | 0.053319 / 0.043533 (0.009786) | 0.277739 / 0.255139 (0.022600) | 0.291341 / 0.283200 (0.008142) | 0.019587 / 0.141683 (-0.122096) | 1.113823 / 1.452155 (-0.338332) | 1.169409 / 1.492716 (-0.323307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091889 / 0.018006 (0.073883) | 0.309162 / 0.000490 (0.308672) | 0.000222 / 0.000200 (0.000022) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022202 / 0.037411 (-0.015209) | 0.076113 / 0.014526 (0.061587) | 0.088416 / 0.176557 (-0.088141) | 0.126822 / 0.737135 (-0.610314) | 0.089540 / 0.296338 (-0.206798) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293697 / 0.215209 (0.078487) | 2.880680 / 2.077655 (0.803026) | 1.580122 / 1.504120 (0.076002) | 1.449492 / 1.541195 (-0.091703) | 1.478900 / 1.468490 (0.010410) | 0.563402 / 4.584777 (-4.021375) | 2.408692 / 3.745712 (-1.337020) | 2.794108 / 5.269862 (-2.475754) | 1.728549 / 4.565676 (-2.837128) | 0.063152 / 0.424275 (-0.361123) | 0.004985 / 0.007607 (-0.002622) | 0.343340 / 0.226044 (0.117295) | 3.426454 / 2.268929 (1.157525) | 1.932918 / 55.444624 (-53.511706) | 1.649533 / 6.876477 (-5.226944) | 1.673416 / 2.142072 (-0.468656) | 0.640000 / 4.805227 (-4.165227) | 0.115501 / 6.500664 (-6.385163) | 0.040756 / 0.075469 (-0.034713) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992468 / 1.841788 (-0.849319) | 12.392072 / 8.074308 (4.317764) | 11.025362 / 10.191392 (0.833970) | 0.130788 / 0.680424 (-0.549635) | 0.015647 / 0.534201 (-0.518554) | 0.285914 / 0.579283 (-0.293369) | 0.277208 / 0.434364 (-0.157156) | 0.322917 / 0.540337 (-0.217420) | 0.427308 / 1.386936 (-0.959628) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-09T15:35:31Z
| 2024-01-11T16:16:54Z
| 2024-01-11T16:10:30Z
|
MEMBER
| null | null | null |
It was not taken into account, e.g., when passing the dataset to a DataLoader with num_workers>0.
Fix https://github.com/huggingface/datasets/issues/6565
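A minimal sketch of the behavior this fixes, assuming a simple generator-backed `IterableDataset` (the example and numbers are illustrative, not taken from the PR):
```
from datasets import IterableDataset

def gen():
    for i in range(10):
        yield {"i": i}

# map() is applied after shuffling, with batches of 3 and drop_last_batch=True,
# so the final partial batch (1 sample) should be dropped: 10 -> 9 samples.
ds = IterableDataset.from_generator(gen).shuffle(seed=42, buffer_size=10)
ds = ds.map(lambda batch: batch, batched=True, batch_size=3, drop_last_batch=True)
assert len(list(ds)) == 9  # before this fix, all 10 samples could be yielded
```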
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6575/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6575/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6575",
"merged_at": "2024-01-11T16:10:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6575"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5035
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5035/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5035/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5035/events
|
https://github.com/huggingface/datasets/pull/5035
| 1,388,914,476
|
PR_kwDODunzps4_wVie
| 5,035
|
Fix typos in load docstrings and comments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T08:05:07Z
| 2022-09-28T17:28:40Z
| 2022-09-28T17:26:15Z
|
MEMBER
| null | null | null |
Minor fix of typos in load docstrings and comments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5035/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5035/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5035.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5035",
"merged_at": "2022-09-28T17:26:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5035.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5035"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6645
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6645/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6645/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6645/events
|
https://github.com/huggingface/datasets/issues/6645
| 2,122,956,818
|
I_kwDODunzps5-icAS
| 6,645
|
Support fsspec 2024.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"I'd be very grateful. This upper bound banished me straight into dependency hell today. :("
] | 2024-02-07T12:45:29Z
| 2024-02-29T15:12:19Z
| 2024-02-29T15:12:19Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Support fsspec 2024.2.
First, we should address:
- #6644
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6645/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6645/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6490
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6490/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6490/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6490/events
|
https://github.com/huggingface/datasets/issues/6490
| 2,037,204,892
|
I_kwDODunzps55bUec
| 6,490
|
`load_dataset(...,save_infos=True)` not working without loading script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/114978051?v=4",
"events_url": "https://api.github.com/users/morganveyret/events{/privacy}",
"followers_url": "https://api.github.com/users/morganveyret/followers",
"following_url": "https://api.github.com/users/morganveyret/following{/other_user}",
"gists_url": "https://api.github.com/users/morganveyret/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/morganveyret",
"id": 114978051,
"login": "morganveyret",
"node_id": "U_kgDOBtptAw",
"organizations_url": "https://api.github.com/users/morganveyret/orgs",
"received_events_url": "https://api.github.com/users/morganveyret/received_events",
"repos_url": "https://api.github.com/users/morganveyret/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/morganveyret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganveyret/subscriptions",
"type": "User",
"url": "https://api.github.com/users/morganveyret",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Also, once the README.md exists in the python environment it is used when loading another dataset in the same format (e.g. json) since it always resolves the path to the same directory.\r\nThe consequence here is any other dataset won't load because of infos mismatch.\r\nTo reproduce this aspect:\r\n1. Do a `load_datasets(...,save_infos=True)` with one dataset without a loading script\r\n2. Try to load another dataset without a loading script in the same format (e.g. json) but with a different schema "
] | 2023-12-12T08:09:18Z
| 2023-12-12T08:36:22Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
It seems that saving a dataset's infos back into the card file is not working for datasets without a loading script.
After tracking the problem a bit, it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory.
Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`), this method actually returns the path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`).
### Steps to reproduce the bug
1. Have a local dataset without any loading script
2. Make sure there are no dataset infos in the README.md
3. Load with `save_infos=True` (see the sketch below)
4. No change in the dataset README.md
5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`)
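
A minimal sketch of these steps, assuming a script-less local JSON dataset (the directory and file names are hypothetical):

```python
from datasets import load_dataset

# 1-2: a plain local dataset, no loading script, README.md without dataset infos
ds = load_dataset("json", data_files="./my_dataset/train.json", save_infos=True)

# 3-5: ./my_dataset/README.md is left untouched; instead a README.md shows up
# under .../site-packages/datasets/packaged_modules/json/
```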
### Expected behavior
The dataset README.md should be updated and no file should be created in the python environment.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.6.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6490/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6490/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5365
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5365/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5365/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5365/events
|
https://github.com/huggingface/datasets/pull/5365
| 1,498,422,466
|
PR_kwDODunzps5Fi6ZD
| 5,365
|
fix: image array should support other formats than uint8
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4",
"events_url": "https://api.github.com/users/vigsterkr/events{/privacy}",
"followers_url": "https://api.github.com/users/vigsterkr/followers",
"following_url": "https://api.github.com/users/vigsterkr/following{/other_user}",
"gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vigsterkr",
"id": 30353,
"login": "vigsterkr",
"node_id": "MDQ6VXNlcjMwMzUz",
"organizations_url": "https://api.github.com/users/vigsterkr/orgs",
"received_events_url": "https://api.github.com/users/vigsterkr/received_events",
"repos_url": "https://api.github.com/users/vigsterkr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vigsterkr",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! \r\n\r\nI agree that the current type-casting (always cast to `np.uint8` as Tensorflow Datasets does) is a bit too harsh. However, not all dtypes are supported in `Image.fromarray` (e.g. np.int64), so we need to treat these with special care (e.g. downcast to the closest supported dtype, maybe with warnings to let the user know what's happening).\r\n\r\nPS: To avoid the CI failures, we need to handle two more instances of the cast to `np.uint8` (both are in the `image.py` file).",
"I've made some changes to the PR.\r\n\r\nNow the encoding procedure behaves as follows:\r\n* for multi-channel arrays: if their dtype is `int`/`uint`, cast to np.uint8 (the only supported dtype for multi-channel arrays), throw an error otherwise\r\n* if the array dtype is of valid kind (\"u\", \"i\", \"f\", ...):\r\n * don't do anything if Pillow natively supports it\r\n * otherwise, downcast until it becomes compatible with Pillow\r\n* raise an error if nothing from above is true",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009537 / 0.011353 (-0.001816) | 0.004946 / 0.011008 (-0.006062) | 0.100552 / 0.038508 (0.062043) | 0.035119 / 0.023109 (0.012009) | 0.295989 / 0.275898 (0.020091) | 0.361326 / 0.323480 (0.037846) | 0.007608 / 0.007986 (-0.000378) | 0.004151 / 0.004328 (-0.000177) | 0.077301 / 0.004250 (0.073050) | 0.042921 / 0.037052 (0.005869) | 0.304804 / 0.258489 (0.046315) | 0.345934 / 0.293841 (0.052093) | 0.038987 / 0.128546 (-0.089559) | 0.012055 / 0.075646 (-0.063591) | 0.334035 / 0.419271 (-0.085236) | 0.052679 / 0.043533 (0.009146) | 0.291700 / 0.255139 (0.036561) | 0.335423 / 0.283200 (0.052223) | 0.107002 / 0.141683 (-0.034680) | 1.516780 / 1.452155 (0.064625) | 1.514137 / 1.492716 (0.021420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014719 / 0.018006 (-0.003287) | 0.545251 / 0.000490 (0.544761) | 0.004719 / 0.000200 (0.004519) | 0.000275 / 0.000054 (0.000220) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026633 / 0.037411 (-0.010779) | 0.106911 / 0.014526 (0.092385) | 0.120258 / 0.176557 (-0.056299) | 0.156196 / 0.737135 (-0.580940) | 0.123132 / 0.296338 (-0.173207) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398018 / 0.215209 (0.182809) | 3.973992 / 2.077655 (1.896337) | 1.776436 / 1.504120 (0.272316) | 1.579036 / 1.541195 (0.037841) | 1.643345 / 1.468490 
(0.174855) | 0.692408 / 4.584777 (-3.892369) | 3.757243 / 3.745712 (0.011531) | 3.226212 / 5.269862 (-2.043649) | 1.797845 / 4.565676 (-2.767831) | 0.085878 / 0.424275 (-0.338398) | 0.012451 / 0.007607 (0.004844) | 0.509755 / 0.226044 (0.283711) | 5.029035 / 2.268929 (2.760107) | 2.255507 / 55.444624 (-53.189117) | 1.892868 / 6.876477 (-4.983609) | 1.900017 / 2.142072 (-0.242055) | 0.853965 / 4.805227 (-3.951263) | 0.167268 / 6.500664 (-6.333396) | 0.062796 / 0.075469 (-0.012673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.183361 / 1.841788 (-0.658427) | 15.103797 / 8.074308 (7.029489) | 14.112931 / 10.191392 (3.921539) | 0.167234 / 0.680424 (-0.513190) | 0.029487 / 0.534201 (-0.504713) | 0.444121 / 0.579283 (-0.135162) | 0.437821 / 0.434364 (0.003457) | 0.544900 / 0.540337 (0.004562) | 0.642142 / 1.386936 (-0.744794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007078 / 0.011353 (-0.004275) | 0.004983 / 0.011008 (-0.006026) | 0.097106 / 0.038508 (0.058598) | 0.033747 / 0.023109 (0.010637) | 0.382030 / 0.275898 (0.106132) | 0.410193 / 0.323480 (0.086713) | 0.006658 / 0.007986 (-0.001327) | 0.005358 / 0.004328 (0.001029) | 0.073878 / 0.004250 (0.069628) | 0.049292 / 0.037052 (0.012240) | 0.384053 / 0.258489 (0.125564) | 0.427826 / 0.293841 (0.133985) | 0.036780 / 0.128546 (-0.091766) | 0.012469 / 0.075646 (-0.063178) | 0.332989 / 0.419271 (-0.086283) | 0.059531 / 0.043533 (0.015998) | 0.378431 / 0.255139 (0.123292) | 0.402672 / 0.283200 (0.119473) | 0.110782 / 0.141683 (-0.030901) | 1.484570 / 1.452155 (0.032416) | 1.608081 / 1.492716 (0.115365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232356 / 0.018006 (0.214350) | 0.545648 / 0.000490 (0.545158) | 0.003113 / 0.000200 (0.002913) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028138 / 0.037411 (-0.009273) | 0.110786 / 0.014526 (0.096260) | 0.123615 / 0.176557 (-0.052941) | 0.165773 / 0.737135 (-0.571362) | 0.126401 / 0.296338 (-0.169937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440518 / 0.215209 (0.225309) | 4.393821 / 2.077655 (2.316166) | 2.295479 / 1.504120 (0.791359) | 2.116679 / 1.541195 (0.575485) | 2.215561 / 1.468490 (0.747071) | 0.722343 / 4.584777 (-3.862434) | 3.783360 / 3.745712 (0.037647) | 3.302242 / 5.269862 (-1.967620) | 1.681535 / 4.565676 (-2.884142) | 0.085738 / 0.424275 (-0.338537) | 0.012373 / 0.007607 (0.004766) | 0.540499 / 0.226044 (0.314455) | 5.384915 / 2.268929 (3.115986) | 2.766346 / 55.444624 (-52.678279) | 2.451994 / 6.876477 (-4.424483) | 2.505720 / 2.142072 (0.363647) | 0.833006 / 4.805227 (-3.972221) | 0.168206 / 6.500664 (-6.332458) | 0.064971 / 0.075469 (-0.010498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253499 / 1.841788 (-0.588289) | 15.381840 / 8.074308 (7.307532) | 13.519493 / 10.191392 (3.328101) | 0.165559 / 0.680424 (-0.514865) | 0.017682 / 0.534201 (-0.516519) | 0.422248 / 0.579283 (-0.157035) | 0.422750 / 0.434364 (-0.011614) | 0.524546 / 0.540337 (-0.015792) | 0.626956 / 1.386936 (-0.759980) |\n\n</details>\n</details>\n\n\n"
] | 2022-12-15T13:17:50Z
| 2023-01-26T18:46:45Z
| 2023-01-26T18:39:36Z
|
CONTRIBUTOR
| null | null | null |
Currently, images that are provided as ndarrays but not in `uint8` format will lose data. For example, in a depth image where the data is in float32 format, the type-casting to uint8 basically makes the whole image blank.
`PIL.Image.fromarray` [does support mode `F`](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes), although maybe some further metadata could be supplied via the [Image](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/main_classes#datasets.Image) object.
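
A small illustration of the problem (a sketch, not the PR's implementation):

```python
# Casting a float32 depth map to uint8 destroys it, while Pillow's
# mode "F" keeps the float values intact.
import numpy as np
from PIL import Image

depth = np.random.rand(64, 64).astype(np.float32)  # values in [0, 1)

blank = Image.fromarray(depth.astype(np.uint8))  # everything truncates to 0
kept = Image.fromarray(depth, mode="F")          # 32-bit float pixels preserved

print(np.asarray(blank).max(), np.asarray(kept).dtype)  # 0 float32
```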
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5365/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5365/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5365.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5365",
"merged_at": "2023-01-26T18:39:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5365.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5365"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6129
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6129/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6129/events
|
https://github.com/huggingface/datasets/pull/6129
| 1,841,563,517
|
PR_kwDODunzps5Xcmuw
| 6,129
|
Release 2.14.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006053 / 0.011353 (-0.005299) | 0.003532 / 0.011008 (-0.007476) | 0.081930 / 0.038508 (0.043422) | 0.059043 / 0.023109 (0.035934) | 0.322785 / 0.275898 (0.046887) | 0.378158 / 0.323480 (0.054678) | 0.004709 / 0.007986 (-0.003277) | 0.002907 / 0.004328 (-0.001421) | 0.061516 / 0.004250 (0.057266) | 0.047209 / 0.037052 (0.010157) | 0.346885 / 0.258489 (0.088396) | 0.381011 / 0.293841 (0.087170) | 0.027491 / 0.128546 (-0.101055) | 0.008014 / 0.075646 (-0.067632) | 0.260663 / 0.419271 (-0.158608) | 0.045427 / 0.043533 (0.001894) | 0.315277 / 0.255139 (0.060138) | 0.377902 / 0.283200 (0.094703) | 0.021371 / 0.141683 (-0.120311) | 1.416350 / 1.452155 (-0.035804) | 1.483345 / 1.492716 (-0.009372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203660 / 0.018006 (0.185654) | 0.569081 / 0.000490 (0.568591) | 0.002742 / 0.000200 (0.002542) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023456 / 0.037411 (-0.013955) | 0.073954 / 0.014526 (0.059428) | 0.082991 / 0.176557 (-0.093566) | 0.144781 / 0.737135 (-0.592354) | 0.083346 / 0.296338 (-0.212992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391542 / 0.215209 (0.176333) | 3.909505 / 2.077655 (1.831850) | 1.862234 / 1.504120 (0.358114) | 1.676076 / 1.541195 (0.134881) | 1.727595 / 1.468490 
(0.259105) | 0.501769 / 4.584777 (-4.083008) | 3.083697 / 3.745712 (-0.662016) | 2.819751 / 5.269862 (-2.450111) | 1.867265 / 4.565676 (-2.698411) | 0.057575 / 0.424275 (-0.366700) | 0.006478 / 0.007607 (-0.001129) | 0.466684 / 0.226044 (0.240640) | 4.657982 / 2.268929 (2.389054) | 2.347052 / 55.444624 (-53.097573) | 1.964688 / 6.876477 (-4.911789) | 2.077821 / 2.142072 (-0.064252) | 0.590591 / 4.805227 (-4.214636) | 0.124585 / 6.500664 (-6.376079) | 0.059468 / 0.075469 (-0.016001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223484 / 1.841788 (-0.618304) | 18.104638 / 8.074308 (10.030330) | 13.755126 / 10.191392 (3.563734) | 0.143158 / 0.680424 (-0.537266) | 0.017147 / 0.534201 (-0.517054) | 0.337427 / 0.579283 (-0.241856) | 0.352270 / 0.434364 (-0.082094) | 0.383718 / 0.540337 (-0.156619) | 0.534973 / 1.386936 (-0.851963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006039 / 0.011353 (-0.005314) | 0.003735 / 0.011008 (-0.007274) | 0.061954 / 0.038508 (0.023446) | 0.061786 / 0.023109 (0.038677) | 0.429420 / 0.275898 (0.153522) | 0.457629 / 0.323480 (0.134149) | 0.004748 / 0.007986 (-0.003237) | 0.002843 / 0.004328 (-0.001485) | 0.061811 / 0.004250 (0.057560) | 0.048740 / 0.037052 (0.011687) | 0.430066 / 0.258489 (0.171577) | 0.465971 / 0.293841 (0.172130) | 0.027577 / 0.128546 (-0.100969) | 0.007981 / 0.075646 (-0.067665) | 0.067580 / 0.419271 (-0.351692) | 0.042058 / 0.043533 (-0.001475) | 0.428412 / 0.255139 (0.173273) | 0.451054 / 0.283200 (0.167855) | 0.020850 / 0.141683 (-0.120833) | 1.453907 / 1.452155 (0.001752) | 1.509914 / 1.492716 (0.017197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237713 / 0.018006 (0.219707) | 0.418064 / 0.000490 (0.417575) | 0.006411 / 0.000200 (0.006211) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024950 / 0.037411 (-0.012462) | 0.076806 / 0.014526 (0.062281) | 0.085237 / 0.176557 (-0.091320) | 0.137940 / 0.737135 (-0.599196) | 0.086266 / 0.296338 (-0.210072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418666 / 0.215209 (0.203457) | 4.160547 / 2.077655 (2.082893) | 2.135671 / 1.504120 (0.631551) | 1.964985 / 1.541195 (0.423790) | 2.009447 / 1.468490 (0.540957) | 0.501377 / 4.584777 (-4.083400) | 3.064293 / 3.745712 (-0.681419) | 2.827153 / 5.269862 (-2.442709) | 1.854698 / 4.565676 (-2.710978) | 0.057662 / 0.424275 (-0.366613) | 0.006829 / 0.007607 (-0.000778) | 0.496730 / 0.226044 (0.270686) | 4.964663 / 2.268929 (2.695735) | 2.583133 / 55.444624 (-52.861491) | 2.329700 / 6.876477 (-4.546776) | 2.415521 / 2.142072 (0.273449) | 0.591973 / 4.805227 (-4.213255) | 0.126801 / 6.500664 (-6.373863) | 0.062811 / 0.075469 (-0.012659) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.348575 / 1.841788 (-0.493212) | 18.282861 / 8.074308 (10.208553) | 13.734056 / 10.191392 (3.542664) | 0.154987 / 0.680424 (-0.525437) | 0.016996 / 0.534201 (-0.517205) | 0.335264 / 0.579283 (-0.244019) | 0.356907 / 0.434364 (-0.077456) | 0.399185 / 0.540337 (-0.141152) | 0.540209 / 1.386936 (-0.846727) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006768 / 0.011353 (-0.004585) | 0.004250 / 0.011008 (-0.006758) | 0.086780 / 0.038508 (0.048272) | 0.080872 / 0.023109 (0.057762) | 0.309281 / 0.275898 (0.033383) | 0.352293 / 0.323480 (0.028814) | 0.005604 / 0.007986 (-0.002382) | 0.003544 / 0.004328 (-0.000784) | 0.066910 / 0.004250 (0.062659) | 0.055568 / 0.037052 (0.018516) | 0.314931 / 0.258489 (0.056442) | 0.366026 / 0.293841 (0.072185) | 0.031247 / 0.128546 (-0.097300) | 0.008860 / 0.075646 (-0.066786) | 0.293210 / 0.419271 (-0.126061) | 0.052868 / 0.043533 (0.009335) | 0.316769 / 0.255139 (0.061630) | 0.352128 / 0.283200 (0.068929) | 0.025492 / 0.141683 (-0.116190) | 1.478379 / 1.452155 (0.026224) | 1.573652 / 1.492716 (0.080936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294975 / 0.018006 (0.276968) | 0.615093 / 0.000490 (0.614603) | 0.004279 / 0.000200 (0.004079) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031557 / 0.037411 (-0.005855) | 0.085026 / 0.014526 (0.070500) | 0.101221 / 0.176557 (-0.075336) | 0.157432 / 0.737135 (-0.579703) | 0.102350 / 0.296338 (-0.193988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384158 / 0.215209 (0.168949) | 3.826656 / 2.077655 (1.749001) | 1.873510 / 1.504120 (0.369390) | 1.721913 / 1.541195 (0.180718) | 1.848779 / 1.468490 
(0.380289) | 0.485128 / 4.584777 (-4.099649) | 3.656660 / 3.745712 (-0.089052) | 3.441964 / 5.269862 (-1.827898) | 2.150611 / 4.565676 (-2.415066) | 0.056869 / 0.424275 (-0.367406) | 0.007382 / 0.007607 (-0.000225) | 0.458751 / 0.226044 (0.232707) | 4.585028 / 2.268929 (2.316099) | 2.439538 / 55.444624 (-53.005086) | 2.116959 / 6.876477 (-4.759518) | 2.459220 / 2.142072 (0.317147) | 0.580907 / 4.805227 (-4.224321) | 0.134502 / 6.500664 (-6.366162) | 0.062528 / 0.075469 (-0.012941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251006 / 1.841788 (-0.590782) | 20.755849 / 8.074308 (12.681541) | 14.456950 / 10.191392 (4.265558) | 0.167074 / 0.680424 (-0.513350) | 0.018482 / 0.534201 (-0.515719) | 0.395867 / 0.579283 (-0.183416) | 0.415620 / 0.434364 (-0.018744) | 0.462247 / 0.540337 (-0.078090) | 0.645762 / 1.386936 (-0.741174) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007050 / 0.011353 (-0.004303) | 0.004421 / 0.011008 (-0.006587) | 0.065312 / 0.038508 (0.026804) | 0.089790 / 0.023109 (0.066681) | 0.366318 / 0.275898 (0.090420) | 0.403542 / 0.323480 (0.080062) | 0.005695 / 0.007986 (-0.002290) | 0.003642 / 0.004328 (-0.000687) | 0.064540 / 0.004250 (0.060289) | 0.060933 / 0.037052 (0.023881) | 0.369004 / 0.258489 (0.110515) | 0.408056 / 0.293841 (0.114215) | 0.032124 / 0.128546 (-0.096422) | 0.008960 / 0.075646 (-0.066686) | 0.071267 / 0.419271 (-0.348005) | 0.049745 / 0.043533 (0.006212) | 0.367203 / 0.255139 (0.112064) | 0.383009 / 0.283200 (0.099809) | 0.025330 / 0.141683 (-0.116353) | 1.518290 / 1.452155 (0.066135) | 1.581738 / 1.492716 (0.089022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.338281 / 0.018006 (0.320275) | 0.538195 / 0.000490 (0.537706) | 0.008498 / 0.000200 (0.008298) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033279 / 0.037411 (-0.004133) | 0.093233 / 0.014526 (0.078707) | 0.106019 / 0.176557 (-0.070538) | 0.161262 / 0.737135 (-0.575874) | 0.109935 / 0.296338 (-0.186404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411563 / 0.215209 (0.196354) | 4.102149 / 2.077655 (2.024495) | 2.108513 / 1.504120 (0.604393) | 1.945344 / 1.541195 (0.404150) | 2.066964 / 1.468490 (0.598474) | 0.482771 / 4.584777 (-4.102006) | 3.659160 / 3.745712 (-0.086552) | 3.420833 / 5.269862 (-1.849029) | 2.147276 / 4.565676 (-2.418400) | 0.056957 / 0.424275 (-0.367318) | 0.007898 / 0.007607 (0.000290) | 0.482401 / 0.226044 (0.256357) | 4.821044 / 2.268929 (2.552115) | 2.567993 / 55.444624 (-52.876631) | 2.336165 / 6.876477 (-4.540312) | 2.545066 / 2.142072 (0.402994) | 0.580888 / 4.805227 (-4.224339) | 0.134092 / 6.500664 (-6.366572) | 0.062681 / 0.075469 (-0.012788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.379124 / 1.841788 (-0.462664) | 21.627949 / 8.074308 (13.553641) | 15.064818 / 10.191392 (4.873426) | 0.169707 / 0.680424 (-0.510716) | 0.018671 / 0.534201 (-0.515530) | 0.400496 / 0.579283 (-0.178787) | 0.415542 / 0.434364 (-0.018822) | 0.484351 / 0.540337 (-0.055986) | 0.646046 / 1.386936 (-0.740890) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007113 / 0.011353 (-0.004240) | 0.004436 / 0.011008 (-0.006572) | 0.087422 / 0.038508 (0.048914) | 0.085996 / 0.023109 (0.062887) | 0.311772 / 0.275898 (0.035873) | 0.353281 / 0.323480 (0.029801) | 0.004562 / 0.007986 (-0.003423) | 0.003840 / 0.004328 (-0.000488) | 0.066500 / 0.004250 (0.062250) | 0.061293 / 0.037052 (0.024241) | 0.328840 / 0.258489 (0.070351) | 0.365587 / 0.293841 (0.071746) | 0.031802 / 0.128546 (-0.096744) | 0.008881 / 0.075646 (-0.066765) | 0.289671 / 0.419271 (-0.129601) | 0.053348 / 0.043533 (0.009816) | 0.307822 / 0.255139 (0.052683) | 0.342559 / 0.283200 (0.059360) | 0.025760 / 0.141683 (-0.115923) | 1.509944 / 1.452155 (0.057789) | 1.556634 / 1.492716 (0.063918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282036 / 0.018006 (0.264029) | 0.608350 / 0.000490 (0.607860) | 0.004843 / 0.000200 (0.004643) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029810 / 0.037411 (-0.007601) | 0.086215 / 0.014526 (0.071689) | 0.102200 / 0.176557 (-0.074356) | 0.158051 / 0.737135 (-0.579084) | 0.103083 / 0.296338 (-0.193255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392119 / 0.215209 (0.176910) | 3.895796 / 2.077655 (1.818141) | 1.921118 / 1.504120 (0.416998) | 1.754271 / 1.541195 (0.213076) | 1.880991 / 1.468490 
(0.412501) | 0.481158 / 4.584777 (-4.103618) | 3.609210 / 3.745712 (-0.136502) | 3.412018 / 5.269862 (-1.857843) | 2.131710 / 4.565676 (-2.433967) | 0.057122 / 0.424275 (-0.367153) | 0.007444 / 0.007607 (-0.000163) | 0.468880 / 0.226044 (0.242835) | 4.682441 / 2.268929 (2.413512) | 2.505613 / 55.444624 (-52.939012) | 2.149655 / 6.876477 (-4.726822) | 2.465904 / 2.142072 (0.323832) | 0.578877 / 4.805227 (-4.226350) | 0.133504 / 6.500664 (-6.367160) | 0.061422 / 0.075469 (-0.014047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269395 / 1.841788 (-0.572393) | 21.107558 / 8.074308 (13.033250) | 15.318502 / 10.191392 (5.127110) | 0.165273 / 0.680424 (-0.515151) | 0.018783 / 0.534201 (-0.515418) | 0.396259 / 0.579283 (-0.183024) | 0.412907 / 0.434364 (-0.021457) | 0.465723 / 0.540337 (-0.074615) | 0.638414 / 1.386936 (-0.748522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007083 / 0.011353 (-0.004270) | 0.004216 / 0.011008 (-0.006793) | 0.065362 / 0.038508 (0.026854) | 0.095454 / 0.023109 (0.072345) | 0.364220 / 0.275898 (0.088322) | 0.417650 / 0.323480 (0.094170) | 0.006114 / 0.007986 (-0.001872) | 0.003577 / 0.004328 (-0.000751) | 0.064830 / 0.004250 (0.060579) | 0.062535 / 0.037052 (0.025483) | 0.381844 / 0.258489 (0.123355) | 0.418996 / 0.293841 (0.125155) | 0.031386 / 0.128546 (-0.097160) | 0.008913 / 0.075646 (-0.066733) | 0.070860 / 0.419271 (-0.348411) | 0.049132 / 0.043533 (0.005599) | 0.360406 / 0.255139 (0.105267) | 0.392407 / 0.283200 (0.109207) | 0.024611 / 0.141683 (-0.117072) | 1.509051 / 1.452155 (0.056896) | 1.570288 / 1.492716 (0.077572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368611 / 0.018006 (0.350605) | 0.537587 / 0.000490 (0.537098) | 0.028056 / 0.000200 (0.027856) | 0.000317 / 0.000054 (0.000262) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031570 / 0.037411 (-0.005841) | 0.088985 / 0.014526 (0.074460) | 0.105268 / 0.176557 (-0.071288) | 0.156724 / 0.737135 (-0.580412) | 0.105266 / 0.296338 (-0.191073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413861 / 0.215209 (0.198652) | 4.127001 / 2.077655 (2.049347) | 2.112114 / 1.504120 (0.607994) | 1.945200 / 1.541195 (0.404005) | 2.083031 / 1.468490 (0.614540) | 0.488086 / 4.584777 (-4.096691) | 3.565584 / 3.745712 (-0.180128) | 3.380782 / 5.269862 (-1.889079) | 2.103481 / 4.565676 (-2.462195) | 0.058203 / 0.424275 (-0.366072) | 0.007996 / 0.007607 (0.000389) | 0.487986 / 0.226044 (0.261941) | 4.871023 / 2.268929 (2.602095) | 2.584632 / 55.444624 (-52.859992) | 2.240103 / 6.876477 (-4.636374) | 2.555165 / 2.142072 (0.413092) | 0.591950 / 4.805227 (-4.213278) | 0.134919 / 6.500664 (-6.365745) | 0.062868 / 0.075469 (-0.012601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369731 / 1.841788 (-0.472057) | 21.497888 / 8.074308 (13.423580) | 14.555054 / 10.191392 (4.363662) | 0.168768 / 0.680424 (-0.511656) | 0.018837 / 0.534201 (-0.515364) | 0.394512 / 0.579283 (-0.184771) | 0.405459 / 0.434364 (-0.028905) | 0.475479 / 0.540337 (-0.064858) | 0.631994 / 1.386936 (-0.754942) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002280) | 0.004894 / 0.011008 (-0.006114) | 0.108790 / 0.038508 (0.070282) | 0.081783 / 0.023109 (0.058674) | 0.381963 / 0.275898 (0.106064) | 0.450700 / 0.323480 (0.127220) | 0.006961 / 0.007986 (-0.001025) | 0.004035 / 0.004328 (-0.000293) | 0.081420 / 0.004250 (0.077169) | 0.058029 / 0.037052 (0.020976) | 0.437453 / 0.258489 (0.178964) | 0.472607 / 0.293841 (0.178766) | 0.048663 / 0.128546 (-0.079884) | 0.013512 / 0.075646 (-0.062134) | 0.406009 / 0.419271 (-0.013262) | 0.067616 / 0.043533 (0.024084) | 0.383641 / 0.255139 (0.128502) | 0.456734 / 0.283200 (0.173534) | 0.033391 / 0.141683 (-0.108292) | 1.753529 / 1.452155 (0.301375) | 1.859831 / 1.492716 (0.367115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215128 / 0.018006 (0.197122) | 0.538261 / 0.000490 (0.537771) | 0.005430 / 0.000200 (0.005230) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032664 / 0.037411 (-0.004748) | 0.093465 / 0.014526 (0.078939) | 0.106637 / 0.176557 (-0.069919) | 0.173642 / 0.737135 (-0.563494) | 0.113944 / 0.296338 (-0.182394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629212 / 0.215209 (0.414003) | 6.116729 / 2.077655 (4.039075) | 2.818000 / 1.504120 (1.313880) | 2.515317 / 1.541195 (0.974122) | 2.466588 / 1.468490 
(0.998098) | 0.850815 / 4.584777 (-3.733962) | 5.051292 / 3.745712 (1.305579) | 4.472138 / 5.269862 (-0.797724) | 2.968317 / 4.565676 (-1.597360) | 0.100173 / 0.424275 (-0.324102) | 0.008407 / 0.007607 (0.000800) | 0.743972 / 0.226044 (0.517928) | 7.397619 / 2.268929 (5.128690) | 3.596681 / 55.444624 (-51.847943) | 2.854674 / 6.876477 (-4.021803) | 3.114274 / 2.142072 (0.972201) | 1.064879 / 4.805227 (-3.740348) | 0.215981 / 6.500664 (-6.284683) | 0.078159 / 0.075469 (0.002690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543291 / 1.841788 (-0.298497) | 23.244641 / 8.074308 (15.170333) | 20.784610 / 10.191392 (10.593218) | 0.222002 / 0.680424 (-0.458422) | 0.028584 / 0.534201 (-0.505617) | 0.478563 / 0.579283 (-0.100720) | 0.556101 / 0.434364 (0.121737) | 0.547446 / 0.540337 (0.007109) | 0.764318 / 1.386936 (-0.622618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.004925 / 0.011008 (-0.006083) | 0.078995 / 0.038508 (0.040487) | 0.092878 / 0.023109 (0.069769) | 0.485615 / 0.275898 (0.209717) | 0.532157 / 0.323480 (0.208677) | 0.008228 / 0.007986 (0.000243) | 0.004777 / 0.004328 (0.000449) | 0.076892 / 0.004250 (0.072642) | 0.066905 / 0.037052 (0.029853) | 0.465497 / 0.258489 (0.207008) | 0.520153 / 0.293841 (0.226312) | 0.047357 / 0.128546 (-0.081189) | 0.016870 / 0.075646 (-0.058776) | 0.090481 / 0.419271 (-0.328791) | 0.060774 / 0.043533 (0.017241) | 0.474368 / 0.255139 (0.219229) | 0.503981 / 0.283200 (0.220781) | 0.036025 / 0.141683 (-0.105658) | 1.769939 / 1.452155 (0.317784) | 1.851518 / 1.492716 (0.358802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265947 / 0.018006 (0.247941) | 0.532317 / 0.000490 (0.531828) | 0.004997 / 0.000200 (0.004797) | 0.000130 / 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034112 / 0.037411 (-0.003299) | 0.102290 / 0.014526 (0.087764) | 0.109989 / 0.176557 (-0.066567) | 0.182813 / 0.737135 (-0.554323) | 0.111774 / 0.296338 (-0.184565) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584893 / 0.215209 (0.369684) | 6.138505 / 2.077655 (4.060850) | 2.925761 / 1.504120 (1.421641) | 2.607320 / 1.541195 (1.066125) | 2.655827 / 1.468490 (1.187337) | 0.871140 / 4.584777 (-3.713637) | 5.051171 / 3.745712 (1.305459) | 4.708008 / 5.269862 (-0.561854) | 3.027485 / 4.565676 (-1.538191) | 0.100970 / 0.424275 (-0.323305) | 0.009640 / 0.007607 (0.002033) | 0.747818 / 0.226044 (0.521774) | 7.539930 / 2.268929 (5.271001) | 3.611693 / 55.444624 (-51.832931) | 2.924087 / 6.876477 (-3.952390) | 3.141993 / 2.142072 (0.999920) | 1.062921 / 4.805227 (-3.742306) | 0.213185 / 6.500664 (-6.287479) | 0.077146 / 0.075469 (0.001677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669182 / 1.841788 (-0.172606) | 23.810242 / 8.074308 (15.735934) | 21.220649 / 10.191392 (11.029257) | 0.212639 / 0.680424 (-0.467785) | 0.026705 / 0.534201 (-0.507496) | 0.469231 / 0.579283 (-0.110053) | 0.551672 / 0.434364 (0.117308) | 0.575043 / 0.540337 (0.034706) | 0.767511 / 1.386936 (-0.619425) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-08T15:43:56Z
| 2023-08-08T16:08:22Z
| 2023-08-08T15:49:06Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6129/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6129.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6129",
"merged_at": "2023-08-08T15:49:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6129.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6129"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6889
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6889/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6889/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6889/events
|
https://github.com/huggingface/datasets/pull/6889
| 2,287,720,539
|
PR_kwDODunzps5u_hW-
| 6,889
|
fix bug #6877
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4",
"events_url": "https://api.github.com/users/arthasking123/events{/privacy}",
"followers_url": "https://api.github.com/users/arthasking123/followers",
"following_url": "https://api.github.com/users/arthasking123/following{/other_user}",
"gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arthasking123",
"id": 16257131,
"login": "arthasking123",
"node_id": "MDQ6VXNlcjE2MjU3MTMx",
"organizations_url": "https://api.github.com/users/arthasking123/orgs",
"received_events_url": "https://api.github.com/users/arthasking123/received_events",
"repos_url": "https://api.github.com/users/arthasking123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arthasking123",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@loicmagne, @KennethEnevoldsen",
"Can you give more details on why this fix works ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6889). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Can you give more details on why this fix works ?\r\n\r\nIn order to locate this file handle problem, I defined a print_open_files_count() function using psutil library:\r\n```python\r\ndef print_open_files_count(markstr):\r\n pid = os.getpid()\r\n p = psutil.Process(pid)\r\n open_files = p.open_files()\r\n print(f\"{markstr}_Open files count: {len(open_files)}\")\r\n\r\n\r\n```\r\n\r\nand added this function as below:\r\n```python\r\n\r\nwith open(file, \"rb\") as f:\r\n print_open_files_count('Before')\r\n...\r\n...\r\n batch_idx += 1\r\nprint_open_files_count('After')\r\n```\r\nand the console output as below when loading the 'mteb/biblenlp-corpus-mmteb' dataset :\r\n```shell\r\nBefore_Open files count: 1\r\nAfter_Open files count: 1\r\nBefore_Open files count: 2\r\nAfter_Open files count: 2\r\nBefore_Open files count: 3\r\nAfter_Open files count: 3\r\n...\r\n```\r\nwhich indicated there was a file handle leakage in the dataset loading process. So I tried to close the file handle manually using os library and found it works although the core issue has not been found temporarily",
"I think it would be better to find the cause and have a cleaner fix, because while your suggested fix works for a simple case, it will lead to files that will stay open if there is an error during the dataset generation for example.\r\n\r\n\r\nBtw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. Also `open()` should close the file at the end of the `with` block so I don't really get how you can get this issue :/",
"> Btw I was not able to reproduce locally (macbook pro m2) or on colab, so it might be something related to your environment. Also `open()` should close the file at the end of the `with` block so I don't really get how you can get this issue :/\r\n\r\nhow about setting the limitation of open files to 1024?",
"I was able to reproduce on colab with\r\n\r\n```\r\n!ulimit -n 256 && python -c \"from datasets import load_dataset; load_dataset('mteb/biblenlp-corpus-mmteb')\"\r\n```\r\n\r\n(also needed to `!pip install -qq git+https://github.com/huggingface/huggingface_hub.git@less-paths-info-calls` to fix a rate limit for some reason)\r\n\r\nwhich led to me find that the issue came from the `GzipFileSystem` that wasn't closing files.\r\n\r\nto reproduce:\r\n\r\n```python\r\nimport gzip\r\nimport os\r\n\r\nimport datasets\r\nimport fsspec\r\n\r\n# os.mkdir(\"tmp\")\r\n# for i in range(300):\r\n# with gzip.open(f\"tmp/{i}.txt.gz\", \"wt\") as f:\r\n# f.write(\"yo\")\r\n\r\nfor i in range(300):\r\n with fsspec.open(f\"gzip://{i}.txt::tmp/{i}.txt.gz\", \"rb\") as f:\r\n f.read()\r\n```\r\n\r\nI opened https://github.com/huggingface/datasets/pull/6893 to fix this, can you try if it works on your side ?",
"ok\n\n\n\n---- Replied Message ----\n| From | Quentin ***@***.***> |\n| Date | 05/13/2024 20:28 |\n| To | ***@***.***> |\n| Cc | ***@***.***>***@***.***> |\n| Subject | Re: [huggingface/datasets] fix bug #6877 (PR #6889) |\n\nI was able to reproduce on colab with\n\n!ulimit -n 256 && python -c \"from datasets import load_dataset; load_dataset('mteb/biblenlp-corpus-mmteb')\"\n\n\n(also needed to !pip install -qq ***@***.*** to fix a rate limit for some reason)\n\nwhich lead to me find that the issue came from the GzipFileSystem that wasn't closing files.\n\nto reproduce:\n\nimportgzipimportosimportdatasetsimportfsspec# os.mkdir(\"tmp\")# for i in range(300):# with gzip.open(f\"tmp/{i}.txt.gz\", \"wt\") as f:# f.write(\"yo\")foriinrange(300):\n withfsspec.open(f\"gzip://::tmp/{i}.txt.gz\", \"rb\") asf:\n f.read()\n\nI opened #6893 to fix this, can you try if it works on your side ?\n\n—\nReply to this email directly, view it on GitHub, or unsubscribe.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>",
"Superseded by:\r\n- #6893"
] | 2024-05-09T13:38:40Z
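The thread above traces the leak to a gzip wrapper stream that did not close the raw file object it wrapped. As a hedged illustration only — a minimal sketch, not the actual #6893 patch, with the class name invented for this example and relying on CPython's gzip internals keeping the wrapped object on `self.fileobj` — the general remedy looks like this:

```python
import gzip

class ClosingGzipFile(gzip.GzipFile):
    # A GzipFile that also closes an externally supplied fileobj on close().
    # By default, gzip.GzipFile only closes files it opened itself
    # (self.myfileobj), so a raw handle passed in via fileobj= stays open --
    # the leak pattern described in the comments above.
    def close(self):
        raw = self.fileobj  # capture before super().close() resets it to None
        try:
            super().close()
        finally:
            if raw is not None and not raw.closed:
                raw.close()

# Usage sketch: the caller (e.g. a filesystem layer) opens the raw handle,
# hands it off, and never touches it again -- close() now releases it too.
raw = open("tmp/0.txt.gz", "rb")
f = ClosingGzipFile(fileobj=raw)
f.read()
f.close()
assert raw.closed
```

With a wrapper like this, each `fsspec.open("gzip://...")` round trip would release the underlying OS handle as soon as the decompressed stream is closed, keeping the open-file count flat in the psutil check shown earlier in the thread.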
| 2024-05-13T13:35:32Z
| 2024-05-13T13:35:32Z
|
NONE
| null | null | null |
fix bug #6877, possibly because `f` becomes invalid after the yield in the generation process.
The results are below:
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:01<00:00, 420.41it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26148.48it/s]
Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 409731.44it/s]
Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 289720.84it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 26663.42it/s]
Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 434056.21it/s]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:00<00:00, 13222.33files/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:04<00:00, 180.67files/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [01:35<00:00, 8.70files/s]
Generating train split: 1571592 examples [00:08, 176736.09 examples/s]
Generating test split: 85533 examples [00:01, 48224.56 examples/s]
Generating validation split: 86246 examples [00:01, 50164.16 examples/s]
Fix https://github.com/huggingface/datasets/issues/6877.
CC: @natolambert
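
For illustration, a minimal sketch of the close-after-yield pattern this PR describes (the function name and batch size are invented for the example; this is not the exact patch):

```python
def _generate_batches(path, batch_size=1 << 20):
    # Yield (batch_idx, bytes) chunks from `path`, releasing the OS-level
    # handle deterministically. If a suspended generator is kept alive and
    # never exhausted or closed, the exit of a `with open(...)` block may
    # never run, so the handle can linger -- the hypothesis behind this PR.
    f = open(path, "rb")
    try:
        batch_idx = 0
        while True:
            batch = f.read(batch_size)
            if not batch:
                break
            yield batch_idx, batch
            batch_idx += 1
    finally:
        f.close()  # runs on exhaustion, on error, and on generator close()
```

Whether this explicit close is needed on top of a `with` block is exactly what the reviewers question above; the later comments trace the actual leak to the gzip filesystem instead.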
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6889/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6889/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6889.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6889",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6889.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6889"
}
|