url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 818M
2.44B
| node_id
stringlengths 18
32
| number
int64 1.96k
7.08k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| comments
sequencelengths 2
2
| created_at
timestamp[s] | updated_at
timestamp[s] | closed_at
timestamp[s] | author_association
stringclasses 4
values | draft
bool 2
classes | pull_request
dict | body
stringlengths 0
36.2k
⌀ | reactions
dict | timeline_url
stringlengths 70
70
| state_reason
stringclasses 3
values | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7082/comments | https://api.github.com/repos/huggingface/datasets/issues/7082/events | https://github.com/huggingface/datasets/pull/7082 | 2,437,354,975 | PR_kwDODunzps522dTJ | 7,082 | Support HTTP authentication in non-streaming mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-30T09:25:49 | 2024-07-30T09:59:11 | null | MEMBER | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7082",
"html_url": "https://github.com/huggingface/datasets/pull/7082",
"diff_url": "https://github.com/huggingface/datasets/pull/7082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7082.patch",
"merged_at": null
} | Support HTTP authentication in non-streaming mode, by support passing HTTP storage_options in non-streaming mode.
- Note that currently, HTTP authentication is supported only in streaming mode.
For example, this is necessary if a remote HTTP host requires authentication to download the data. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7082/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7081/comments | https://api.github.com/repos/huggingface/datasets/issues/7081/events | https://github.com/huggingface/datasets/pull/7081 | 2,437,059,657 | PR_kwDODunzps521cGm | 7,081 | Set load_from_disk path type as PathLike | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-30T07:00:38 | 2024-07-30T08:30:37 | 2024-07-30T08:21:50 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7081",
"html_url": "https://github.com/huggingface/datasets/pull/7081",
"diff_url": "https://github.com/huggingface/datasets/pull/7081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7081.patch",
"merged_at": "2024-07-30T08:21:50"
} | Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7081/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7080/comments | https://api.github.com/repos/huggingface/datasets/issues/7080/events | https://github.com/huggingface/datasets/issues/7080 | 2,434,275,664 | I_kwDODunzps6RGBlQ | 7,080 | Generating train split takes a long time | {
"login": "alexanderswerdlow",
"id": 35648800,
"node_id": "MDQ6VXNlcjM1NjQ4ODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexanderswerdlow",
"html_url": "https://github.com/alexanderswerdlow",
"followers_url": "https://api.github.com/users/alexanderswerdlow/followers",
"following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_user}",
"gists_url": "https://api.github.com/users/alexanderswerdlow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexanderswerdlow/subscriptions",
"organizations_url": "https://api.github.com/users/alexanderswerdlow/orgs",
"repos_url": "https://api.github.com/users/alexanderswerdlow/repos",
"events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexanderswerdlow/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-29T01:42:43 | 2024-07-29T01:42:43 | null | NONE | null | null | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7080/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7079/comments | https://api.github.com/repos/huggingface/datasets/issues/7079/events | https://github.com/huggingface/datasets/issues/7079 | 2,433,363,298 | I_kwDODunzps6RCi1i | 7,079 | HfHubHTTPError: 500 Server Error: Internal Server Error for url: | {
"login": "neoneye",
"id": 147971,
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neoneye",
"html_url": "https://github.com/neoneye",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"repos_url": "https://api.github.com/users/neoneye/repos",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-27T08:21:03 | 2024-07-27T20:06:44 | 2024-07-27T19:52:30 | NONE | null | null | ### Describe the bug
newly uploaded datasets, since yesterday, yields an error.
old datasets, works fine.
Seems like the datasets api server returns a 500
I'm getting the same error, when I invoke `load_dataset` with my dataset.
Long discussion about it here, but I'm not sure anyone from huggingface have seen it.
https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1
### Steps to reproduce the bug
this api url:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
respond with:
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
### Expected behavior
return no error with newer datasets.
With older datasets I can load the datasets fine.
### Environment info
# Browser
When I access the api in the browser:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
### Request headers
```
Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding gzip, deflate, br, zstd
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host huggingface.co
Priority u=1
Sec-Fetch-Dest document
Sec-Fetch-Mode navigate
Sec-Fetch-Site cross-site
Upgrade-Insecure-Requests 1
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0
```
### Response headers
```
X-Firefox-Spdy h2
access-control-allow-origin https://huggingface.co
access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range
content-length 80
content-type application/json; charset=utf-8
cross-origin-opener-policy same-origin
date Fri, 26 Jul 2024 19:09:45 GMT
etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c"
referrer-policy strict-origin-when-cross-origin
vary Origin
via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront)
x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ==
x-amz-cf-pop CPH50-C1
x-cache Error from cloudfront
x-error-message Internal Error - We're working hard to fix this as soon as possible!
x-powered-by huggingface-moon
x-request-id Root=1-66a3f479-026417465ef42f49349fdca1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/datasets/issues/7079/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7078/comments | https://api.github.com/repos/huggingface/datasets/issues/7078/events | https://github.com/huggingface/datasets/pull/7078 | 2,433,270,271 | PR_kwDODunzps52oq4n | 7,078 | Fix CI test_convert_to_parquet | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-27T05:32:40 | 2024-07-27T05:50:57 | 2024-07-27T05:44:32 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7078",
"html_url": "https://github.com/huggingface/datasets/pull/7078",
"diff_url": "https://github.com/huggingface/datasets/pull/7078.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7078.patch",
"merged_at": "2024-07-27T05:44:32"
} | Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix:
- #7074 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7078/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7077/comments | https://api.github.com/repos/huggingface/datasets/issues/7077/events | https://github.com/huggingface/datasets/issues/7077 | 2,432,345,489 | I_kwDODunzps6Q-qWR | 7,077 | column_names ignored by load_dataset() when loading CSV file | {
"login": "luismsgomes",
"id": 9130265,
"node_id": "MDQ6VXNlcjkxMzAyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luismsgomes",
"html_url": "https://github.com/luismsgomes",
"followers_url": "https://api.github.com/users/luismsgomes/followers",
"following_url": "https://api.github.com/users/luismsgomes/following{/other_user}",
"gists_url": "https://api.github.com/users/luismsgomes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luismsgomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luismsgomes/subscriptions",
"organizations_url": "https://api.github.com/users/luismsgomes/orgs",
"repos_url": "https://api.github.com/users/luismsgomes/repos",
"events_url": "https://api.github.com/users/luismsgomes/events{/privacy}",
"received_events_url": "https://api.github.com/users/luismsgomes/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-26T14:18:04 | 2024-07-30T07:52:26 | null | NONE | null | null | ### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg.
### Expected behavior
The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.24.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7077/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7076/comments | https://api.github.com/repos/huggingface/datasets/issues/7076/events | https://github.com/huggingface/datasets/pull/7076 | 2,432,275,393 | PR_kwDODunzps52lTDe | 7,076 | 🧪 Do not mock create_commit | {
"login": "coyotte508",
"id": 342922,
"node_id": "MDQ6VXNlcjM0MjkyMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/342922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coyotte508",
"html_url": "https://github.com/coyotte508",
"followers_url": "https://api.github.com/users/coyotte508/followers",
"following_url": "https://api.github.com/users/coyotte508/following{/other_user}",
"gists_url": "https://api.github.com/users/coyotte508/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coyotte508/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coyotte508/subscriptions",
"organizations_url": "https://api.github.com/users/coyotte508/orgs",
"repos_url": "https://api.github.com/users/coyotte508/repos",
"events_url": "https://api.github.com/users/coyotte508/events{/privacy}",
"received_events_url": "https://api.github.com/users/coyotte508/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-26T13:44:42 | 2024-07-27T05:48:17 | 2024-07-27T05:48:17 | MEMBER | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7076",
"html_url": "https://github.com/huggingface/datasets/pull/7076",
"diff_url": "https://github.com/huggingface/datasets/pull/7076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7076.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7076/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7075/comments | https://api.github.com/repos/huggingface/datasets/issues/7075/events | https://github.com/huggingface/datasets/pull/7075 | 2,432,027,412 | PR_kwDODunzps52kciD | 7,075 | Update required soxr version from pre-release to release | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-26T11:24:35 | 2024-07-26T11:46:52 | 2024-07-26T11:40:49 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7075",
"html_url": "https://github.com/huggingface/datasets/pull/7075",
"diff_url": "https://github.com/huggingface/datasets/pull/7075.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7075.patch",
"merged_at": "2024-07-26T11:40:49"
} | Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7075/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7074/comments | https://api.github.com/repos/huggingface/datasets/issues/7074/events | https://github.com/huggingface/datasets/pull/7074 | 2,431,772,703 | PR_kwDODunzps52jkw4 | 7,074 | Fix CI by temporarily marking test_convert_to_parquet as expected to fail | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-26T09:03:33 | 2024-07-26T09:23:33 | 2024-07-26T09:16:12 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7074",
"html_url": "https://github.com/huggingface/datasets/pull/7074",
"diff_url": "https://github.com/huggingface/datasets/pull/7074.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7074.patch",
"merged_at": "2024-07-26T09:16:12"
} | As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail.
Fix #7073.
Revert once root cause is fixed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7074/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7073/comments | https://api.github.com/repos/huggingface/datasets/issues/7073/events | https://github.com/huggingface/datasets/issues/7073 | 2,431,706,568 | I_kwDODunzps6Q8OXI | 7,073 | CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-26T08:27:41 | 2024-07-27T05:48:02 | 2024-07-26T09:16:13 | MEMBER | null | null | See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.
Invalid rev id: refs/pr/1
```
```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet
dataset.push_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub
api.preupload_lfs_files(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files
_fetch_upload_modes(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn
return fn(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes
hf_raise_for_status(resp)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7073/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7072/comments | https://api.github.com/repos/huggingface/datasets/issues/7072/events | https://github.com/huggingface/datasets/issues/7072 | 2,430,577,916 | I_kwDODunzps6Q36z8 | 7,072 | nm | {
"login": "brettdavies",
"id": 26392883,
"node_id": "MDQ6VXNlcjI2MzkyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brettdavies",
"html_url": "https://github.com/brettdavies",
"followers_url": "https://api.github.com/users/brettdavies/followers",
"following_url": "https://api.github.com/users/brettdavies/following{/other_user}",
"gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions",
"organizations_url": "https://api.github.com/users/brettdavies/orgs",
"repos_url": "https://api.github.com/users/brettdavies/repos",
"events_url": "https://api.github.com/users/brettdavies/events{/privacy}",
"received_events_url": "https://api.github.com/users/brettdavies/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-25T17:03:24 | 2024-07-25T20:36:11 | 2024-07-25T20:36:11 | NONE | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7072/timeline | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/7071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7071/comments | https://api.github.com/repos/huggingface/datasets/issues/7071/events | https://github.com/huggingface/datasets/issues/7071 | 2,430,313,011 | I_kwDODunzps6Q26Iz | 7,071 | Filter hangs | {
"login": "lucienwalewski",
"id": 61711045,
"node_id": "MDQ6VXNlcjYxNzExMDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/61711045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucienwalewski",
"html_url": "https://github.com/lucienwalewski",
"followers_url": "https://api.github.com/users/lucienwalewski/followers",
"following_url": "https://api.github.com/users/lucienwalewski/following{/other_user}",
"gists_url": "https://api.github.com/users/lucienwalewski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucienwalewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucienwalewski/subscriptions",
"organizations_url": "https://api.github.com/users/lucienwalewski/orgs",
"repos_url": "https://api.github.com/users/lucienwalewski/repos",
"events_url": "https://api.github.com/users/lucienwalewski/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucienwalewski/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-25T15:29:05 | 2024-07-25T15:36:59 | null | NONE | null | null | ### Describe the bug
When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I have converted the data to the Parquet format.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('lcolonn/patfig', split='test')
ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
```
Eventually I ctrl+C and I obtain this stack trace:
```
>>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter
indices = self.map(
^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function
num_examples = len(batch[next(iter(batch.keys()))])
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__
value = self.format(key)
^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format
return self.formatter.format_column(self.pa_table.select([key]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load
n, err_code = decoder.decode(b)
^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
Warning! This can even seem to cause some computers to crash.
### Expected behavior
Should return the filtered dataset
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7071/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7070/comments | https://api.github.com/repos/huggingface/datasets/issues/7070/events | https://github.com/huggingface/datasets/issues/7070 | 2,430,285,235 | I_kwDODunzps6Q2zWz | 7,070 | how set_transform affects batch size? | {
"login": "VafaKnm",
"id": 103993288,
"node_id": "U_kgDOBjLPyA",
"avatar_url": "https://avatars.githubusercontent.com/u/103993288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VafaKnm",
"html_url": "https://github.com/VafaKnm",
"followers_url": "https://api.github.com/users/VafaKnm/followers",
"following_url": "https://api.github.com/users/VafaKnm/following{/other_user}",
"gists_url": "https://api.github.com/users/VafaKnm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VafaKnm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VafaKnm/subscriptions",
"organizations_url": "https://api.github.com/users/VafaKnm/orgs",
"repos_url": "https://api.github.com/users/VafaKnm/repos",
"events_url": "https://api.github.com/users/VafaKnm/events{/privacy}",
"received_events_url": "https://api.github.com/users/VafaKnm/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-25T15:19:34 | 2024-07-25T15:19:34 | null | NONE | null | null | ### Describe the bug
I am trying to fine-tune w2v-bert for ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So i change the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
input_length = len(input_features)
labels = processor.tokenizer(batch["text"], padding=False).input_ids
batch = {
"input_features": [input_features],
"input_length": [input_length],
"labels": [labels]
}
return batch
train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the DataCollatorCTCWithPadding class like this:
```
@dataclass
class DataCollatorCTCWithPadding:
processor: Wav2Vec2BertProcessor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# Separate input_features and labels
input_features = [{"input_features": feature["input_features"][0]} for feature in features]
labels = [feature["labels"][0] for feature in features]
# Pad input features
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
# Pad and process labels
label_features = self.processor.tokenizer.pad(
{"input_ids": labels},
padding=self.padding,
return_tensors="pt",
)
labels = label_features["input_ids"]
attention_mask = label_features["attention_mask"]
# Replace padding with -100 to ignore these tokens during loss calculation
labels = labels.masked_fill(attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
```
But now a strange thing is happening, no matter how much I increase the batch size, the amount of V-RAM GPU usage does not change, while the number of total steps in the progress-bar (logging) changes. Is this normal or have I made a mistake?
### Steps to reproduce the bug
i can share my code if needed
### Expected behavior
Equal to the batch size value, the set_transform function is applied to the dataset and given to the model as a batch.
### Environment info
all updated versions | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7070/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7069/comments | https://api.github.com/repos/huggingface/datasets/issues/7069/events | https://github.com/huggingface/datasets/pull/7069 | 2,429,281,339 | PR_kwDODunzps52betB | 7,069 | Fix push_to_hub by not calling create_branch if branch exists | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-25T07:50:04 | 2024-07-30T10:56:57 | 2024-07-30T10:51:01 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7069",
"html_url": "https://github.com/huggingface/datasets/pull/7069",
"diff_url": "https://github.com/huggingface/datasets/pull/7069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7069.patch",
"merged_at": "2024-07-30T10:51:01"
} | Fix push_to_hub by not calling create_branch if branch exists.
Note that currently create_branch raises a 403 Forbidden error even if all these conditions are met:
- exist_ok is passed
- the branch already exists
- the user does not have WRITE permission
Fix #7067.
Related issue:
- https://github.com/huggingface/huggingface_hub/issues/2419 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7069/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7068/comments | https://api.github.com/repos/huggingface/datasets/issues/7068/events | https://github.com/huggingface/datasets/pull/7068 | 2,426,657,434 | PR_kwDODunzps52SwXS | 7,068 | Fix prepare_single_hop_path_and_storage_options | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-24T05:52:34 | 2024-07-29T07:02:07 | 2024-07-29T06:56:15 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7068",
"html_url": "https://github.com/huggingface/datasets/pull/7068",
"diff_url": "https://github.com/huggingface/datasets/pull/7068.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7068.patch",
"merged_at": "2024-07-29T06:56:15"
} | Fix `_prepare_single_hop_path_and_storage_options`:
- Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs
- Do not overwrite passed `storage_options` nested values:
- Before, when passed
```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```,
it was overwritten to
```{"https": {"client_kwargs": {"trust_env": True}}}```
- Now, the result combines both:
```{"https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}}``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7068/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7067/comments | https://api.github.com/repos/huggingface/datasets/issues/7067/events | https://github.com/huggingface/datasets/issues/7067 | 2,425,460,168 | I_kwDODunzps6QkZXI | 7,067 | Convert_to_parquet fails for datasets with multiple configs | {
"login": "HuangZhen02",
"id": 97585031,
"node_id": "U_kgDOBdEHhw",
"avatar_url": "https://avatars.githubusercontent.com/u/97585031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HuangZhen02",
"html_url": "https://github.com/HuangZhen02",
"followers_url": "https://api.github.com/users/HuangZhen02/followers",
"following_url": "https://api.github.com/users/HuangZhen02/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangZhen02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HuangZhen02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangZhen02/subscriptions",
"organizations_url": "https://api.github.com/users/HuangZhen02/orgs",
"repos_url": "https://api.github.com/users/HuangZhen02/repos",
"events_url": "https://api.github.com/users/HuangZhen02/events{/privacy}",
"received_events_url": "https://api.github.com/users/HuangZhen02/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-23T15:09:33 | 2024-07-30T10:51:02 | 2024-07-30T10:51:02 | NONE | null | null | If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:
```
Traceback (most recent call last):
File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main
service.run()
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run
dataset.push_to_hub(
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub
api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
hf_raise_for_status(response)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f)
Bad request:
Invalid reference for a branch: refs/pr/1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7067/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7066/comments | https://api.github.com/repos/huggingface/datasets/issues/7066/events | https://github.com/huggingface/datasets/issues/7066 | 2,425,125,160 | I_kwDODunzps6QjHko | 7,066 | One subset per file in repo ? | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-23T12:43:59 | 2024-07-23T12:43:59 | null | MEMBER | null | null | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```
It would be nice to detect those subsets automatically using a simple heuristic. For example we can group files together if their paths names are the same except some digits ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7066/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7066/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7065/comments | https://api.github.com/repos/huggingface/datasets/issues/7065/events | https://github.com/huggingface/datasets/issues/7065 | 2,424,734,953 | I_kwDODunzps6QhoTp | 7,065 | Cannot get item after loading from disk and then converting to iterable. | {
"login": "happyTonakai",
"id": 21305646,
"node_id": "MDQ6VXNlcjIxMzA1NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21305646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/happyTonakai",
"html_url": "https://github.com/happyTonakai",
"followers_url": "https://api.github.com/users/happyTonakai/followers",
"following_url": "https://api.github.com/users/happyTonakai/following{/other_user}",
"gists_url": "https://api.github.com/users/happyTonakai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/happyTonakai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/happyTonakai/subscriptions",
"organizations_url": "https://api.github.com/users/happyTonakai/orgs",
"repos_url": "https://api.github.com/users/happyTonakai/repos",
"events_url": "https://api.github.com/users/happyTonakai/events{/privacy}",
"received_events_url": "https://api.github.com/users/happyTonakai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-23T09:37:56 | 2024-07-23T09:37:56 | null | NONE | null | null | ### Describe the bug
The dataset generated from local file works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Audio(sampling_rate=None, mono=False))
.cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
break
```
But after saving it to disk and then loading it from disk, I cannot get data as expected.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Audio(sampling_rate=None, mono=False))
.cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ds.save_to_disk("./train")
ds = datasets.load_from_disk("./train")
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
break
```
After a long time waiting, an error occurs:
```
Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s]
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get
if not self._poll(timeout):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module>
for batch in dataloader:
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data
success, data = self._try_get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data
raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e
RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly
```
It seems that streaming is not supported by `laod_from_disk`, so does that mean I cannot convert it to iterable?
### Steps to reproduce the bug
1. Create a `Dataset` from local files with `from_dict`
2. Save it to disk with `save_to_disk`
3. Load it from disk with `load_from_disk`
4. Convert to iterable with `to_iterable_dataset`
5. Loop the dataset
### Expected behavior
Get items faster than the original dataset generated from dict.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.23.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7065/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7064/comments | https://api.github.com/repos/huggingface/datasets/issues/7064/events | https://github.com/huggingface/datasets/pull/7064 | 2,424,613,104 | PR_kwDODunzps52Lz2- | 7,064 | Add `batch` method to `Dataset` class | {
"login": "lappemic",
"id": 61876623,
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lappemic",
"html_url": "https://github.com/lappemic",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"repos_url": "https://api.github.com/users/lappemic/repos",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-23T08:40:43 | 2024-07-25T13:51:25 | 2024-07-25T13:45:20 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7064",
"html_url": "https://github.com/huggingface/datasets/pull/7064",
"diff_url": "https://github.com/huggingface/datasets/pull/7064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7064.patch",
"merged_at": "2024-07-25T13:45:20"
} | This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation uses as well the existing `map` method for efficient batching of examples.
Key changes:
- Add `batch` method to `Dataset` class in `arrow_dataset.py`
- Utilize `map` method for batching
Closes #7063
Once the approach is approved, i will create the tests and update the documentation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7064/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7063/comments | https://api.github.com/repos/huggingface/datasets/issues/7063/events | https://github.com/huggingface/datasets/issues/7063 | 2,424,488,648 | I_kwDODunzps6QgsLI | 7,063 | Add `batch` method to `Dataset` | {
"login": "lappemic",
"id": 61876623,
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lappemic",
"html_url": "https://github.com/lappemic",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"repos_url": "https://api.github.com/users/lappemic/repos",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-23T07:36:59 | 2024-07-25T13:45:21 | 2024-07-25T13:45:21 | CONTRIBUTOR | null | null | ### Feature request
Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054.
### Motivation
A batched iteration speeds up data loading significantly (see e.g. #6279)
### Your contribution
I plan to open a PR to implement this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7063/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7062/comments | https://api.github.com/repos/huggingface/datasets/issues/7062/events | https://github.com/huggingface/datasets/pull/7062 | 2,424,467,484 | PR_kwDODunzps52LUPR | 7,062 | Avoid calling http_head for non-HTTP URLs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-23T07:25:09 | 2024-07-23T14:28:27 | 2024-07-23T14:21:08 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7062",
"html_url": "https://github.com/huggingface/datasets/pull/7062",
"diff_url": "https://github.com/huggingface/datasets/pull/7062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7062.patch",
"merged_at": "2024-07-23T14:21:08"
} | Avoid calling `http_head` for non-HTTP URLs, by adding and `else` statement.
Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,...
I discovered this while working in an unrelated issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7062/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7061/comments | https://api.github.com/repos/huggingface/datasets/issues/7061/events | https://github.com/huggingface/datasets/issues/7061 | 2,423,786,881 | I_kwDODunzps6QeA2B | 7,061 | Custom Dataset | Still Raise Error while handling errors in _generate_examples | {
"login": "hahmad2008",
"id": 68266028,
"node_id": "MDQ6VXNlcjY4MjY2MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hahmad2008",
"html_url": "https://github.com/hahmad2008",
"followers_url": "https://api.github.com/users/hahmad2008/followers",
"following_url": "https://api.github.com/users/hahmad2008/following{/other_user}",
"gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions",
"organizations_url": "https://api.github.com/users/hahmad2008/orgs",
"repos_url": "https://api.github.com/users/hahmad2008/repos",
"events_url": "https://api.github.com/users/hahmad2008/events{/privacy}",
"received_events_url": "https://api.github.com/users/hahmad2008/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-22T21:18:12 | 2024-07-22T21:18:12 | null | NONE | null | null | ### Describe the bug
I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in custom dataset. I am writing a dataset script which read jsonl files and i need to handle errors and continue reading files without raising exception and exit the execution.
```
def _generate_examples(self, filepaths):
errors=[]
id_ = 0
for filepath in filepaths:
try:
with open(filepath, 'r') as f:
for line in f:
json_obj = json.loads(line)
yield id_, json_obj
id_ += 1
except Exception as exc:
logger.error(f"error occur at filepath: {filepath}")
errors.append(error)
```
seems the logger.error is printed but still exception is raised the the run is exit.
```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
json_obj = json.loads(line)
File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ? examples/s]>
RemoteTraceback:
"""
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in
_write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""
The above exception was the direct cause of the following exception:
│ │
│ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. │
│ py:1377 in <listcomp> │
│ │
│ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │
│ 1375 │ │ │ │ │ break │
│ 1376 │ │ # we get the result in case there's an error to raise │
│ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │
│ 1378 │
│ │
│ ╭──────────────────────────────── locals ─────────────────────────────────╮ │
│ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │
│ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ ╰─────────────────────────────────────────────────────────────────────────╯ │
│ │
│ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │
│ in get │
│ │
│ 768 │ │ if self._success: │
│ 769 │ │ │ return self._value │
│ 770 │ │ else: │
│ ❱ 771 │ │ │ raise self._value │
│ 772 │ │
│ 773 │ def _set(self, i, obj): │
│ 774 │ │ self._success, self._value = obj │
│ │
│ ╭────────────────────────────── locals ──────────────────────────────╮ │
│ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ │ timeout = None │ │
│ ╰────────────────────────────────────────────────────────────────────╯ │
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
same as above
### Expected behavior
should handle error and continue reading remaining files
### Environment info
python 3.9 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7061/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7060/comments | https://api.github.com/repos/huggingface/datasets/issues/7060/events | https://github.com/huggingface/datasets/pull/7060 | 2,423,188,419 | PR_kwDODunzps52G71g | 7,060 | WebDataset BuilderConfig | {
"login": "hlky",
"id": 106811348,
"node_id": "U_kgDOBl3P1A",
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hlky",
"html_url": "https://github.com/hlky",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.github.com/users/hlky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlky/subscriptions",
"organizations_url": "https://api.github.com/users/hlky/orgs",
"repos_url": "https://api.github.com/users/hlky/repos",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"received_events_url": "https://api.github.com/users/hlky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-22T15:41:07 | 2024-07-23T13:28:44 | 2024-07-23T13:28:44 | NONE | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7060",
"html_url": "https://github.com/huggingface/datasets/pull/7060",
"diff_url": "https://github.com/huggingface/datasets/pull/7060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7060.patch",
"merged_at": null
} | This PR adds `WebDatasetConfig`.
Closes #7055 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7060/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7059/comments | https://api.github.com/repos/huggingface/datasets/issues/7059/events | https://github.com/huggingface/datasets/issues/7059 | 2,422,827,892 | I_kwDODunzps6QaWt0 | 7,059 | None values are skipped when reading jsonl in subobjects | {
"login": "PonteIneptique",
"id": 1929830,
"node_id": "MDQ6VXNlcjE5Mjk4MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PonteIneptique",
"html_url": "https://github.com/PonteIneptique",
"followers_url": "https://api.github.com/users/PonteIneptique/followers",
"following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}",
"gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions",
"organizations_url": "https://api.github.com/users/PonteIneptique/orgs",
"repos_url": "https://api.github.com/users/PonteIneptique/repos",
"events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}",
"received_events_url": "https://api.github.com/users/PonteIneptique/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-22T13:02:42 | 2024-07-22T13:02:53 | null | NONE | null | null | ### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
E.g., let's take this example
Here are two version of a same dataset:
[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)
### Steps to reproduce the bug
1. Load the `buggy.tar.gz` dataset
2. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines]`
3. Load the `not-buggy.tar.gz` dataset
4. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines]`
### Expected behavior
Both should have 4 baseline entries:
1. Buggy should have None followed by three lists
2. Non-Buggy should have four lists, and the first one should be an empty list.
One does not work, 2 works. Despite accepting None in another position than the first one.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7059/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7058/comments | https://api.github.com/repos/huggingface/datasets/issues/7058/events | https://github.com/huggingface/datasets/issues/7058 | 2,422,560,355 | I_kwDODunzps6QZVZj | 7,058 | New feature type: Document | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-22T10:49:20 | 2024-07-22T10:49:20 | null | CONTRIBUTOR | null | null | It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7058/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7057/comments | https://api.github.com/repos/huggingface/datasets/issues/7057/events | https://github.com/huggingface/datasets/pull/7057 | 2,422,498,520 | PR_kwDODunzps52EjGC | 7,057 | Update load_hub.mdx | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-22T10:17:46 | 2024-07-22T10:34:14 | 2024-07-22T10:28:10 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7057",
"html_url": "https://github.com/huggingface/datasets/pull/7057",
"diff_url": "https://github.com/huggingface/datasets/pull/7057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7057.patch",
"merged_at": "2024-07-22T10:28:10"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7057/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7056/comments | https://api.github.com/repos/huggingface/datasets/issues/7056/events | https://github.com/huggingface/datasets/pull/7056 | 2,422,192,257 | PR_kwDODunzps52DgOu | 7,056 | Make `BufferShuffledExamplesIterable` resumable | {
"login": "yzhangcs",
"id": 18402347,
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzhangcs",
"html_url": "https://github.com/yzhangcs",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-22T07:50:02 | 2024-07-22T15:37:01 | null | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7056",
"html_url": "https://github.com/huggingface/datasets/pull/7056",
"diff_url": "https://github.com/huggingface/datasets/pull/7056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7056.patch",
"merged_at": null
} | This PR aims to implement a resumable `BufferShuffledExamplesIterable`.
Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first example in the buffer dict.
The idea is that since the buffer size is limited, even if the entire buffer is discarded, we can rebuild it as long as the state of the oldest example is recorded. For buffer size $B$, the expected distance between when an example is pushed and when it is yielded is
$d = \sum_{k=1}^{\infty} k\frac{1}{B} (1 - \frac{1}{B} )^{k-1} =B$.
Simulation experiments support these claims:
```py
from random import randint
BUFFER_SIZE = 1024
dists = []
buffer = []
for i in range(10000000):
if i < BUFFER_SIZE:
buffer.append(i)
else:
index = randint(0, BUFFER_SIZE - 1)
dists.append(i - buffer[index])
buffer[index] = i
print(f"MIN DIST: {min(dists)}\nMAX DIST: {max(dists)}\nAVG DIST: {sum(dists) / len(dists):.2f}\n")
```
which produces the following output:
```py
MIN DIST: 1
MAX DIST: 15136
AVG DIST: 1023.95
```
The overall time for reconstructing the buffer and recovery should not be too long.
The following code mimics the cases of resuming online tokenization by `datasets` and `StatefulDataLoader` under distributed scenarios,
```py
import pickle
import time
from itertools import chain
from typing import Any, Dict, List
import torch
from datasets import load_dataset
from torchdata.stateful_dataloader import StatefulDataLoader
from tqdm import tqdm
from transformers import AutoTokenizer, DataCollatorForLanguageModeling
tokenizer = AutoTokenizer.from_pretrained('fla-hub/gla-1.3B-100B')
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
torch.manual_seed(42)
def tokenize(examples: Dict[str, List[Any]]) -> Dict[str, List[List[int]]]:
input_ids = tokenizer(examples['text'])['input_ids']
input_ids = list(chain(*input_ids))
total_length = len(input_ids)
chunk_size = 2048
total_length = (total_length // chunk_size) * chunk_size
# the last chunk smaller than chunk_size will be discarded
return {'input_ids': [input_ids[i: i+chunk_size] for i in range(0, total_length, chunk_size)]}
batch_size = 16
num_workers = 5
context_length = 2048
rank = 1
world_size = 32
prefetch_factor = 2
steps = 2048
path = 'fla-hub/slimpajama-test'
dataset = load_dataset(
path=path,
split='train',
streaming=True,
trust_remote_code=True
)
dataset = dataset.map(tokenize, batched=True, remove_columns=next(iter(dataset)).keys())
dataset = dataset.shuffle(seed=42)
loader = StatefulDataLoader(dataset=dataset,
batch_size=batch_size,
collate_fn=data_collator,
num_workers=num_workers,
persistent_workers=False,
prefetch_factor=prefetch_factor)
start = time.time()
for i, batch in tqdm(enumerate(loader)):
if i == 0:
print(f'{i}\n{batch["input_ids"]}')
if i == steps - 1:
print(f'{i}\n{batch["input_ids"]}')
state_dict = loader.state_dict()
if i == steps:
print(f'{i}\n{batch["input_ids"]}')
break
print(f"{time.time() - start:.2f}s elapsed")
print(f"{len(pickle.dumps(state_dict)) / 1024**2:.2f}MB states in total")
for worker in state_dict['_snapshot']['_worker_snapshots'].keys():
print(f"{worker} {len(pickle.dumps(state_dict['_snapshot']['_worker_snapshots'][worker])) / 1024**2:.2f}MB")
print(state_dict['_snapshot']['_worker_snapshots']['worker_0']['dataset_state'])
loader = StatefulDataLoader(dataset=dataset,
batch_size=batch_size,
collate_fn=data_collator,
num_workers=num_workers,
persistent_workers=False,
prefetch_factor=prefetch_factor)
print("Loading state dict")
loader.load_state_dict(state_dict)
start = time.time()
for batch in loader:
print(batch['input_ids'])
break
print(f"{time.time() - start:.2f}s elapsed")
```
and the outputs are
```py
0
tensor([[ 909, 395, 19082, ..., 13088, 16232, 395],
[ 601, 28705, 28770, ..., 28733, 923, 288],
[21753, 15071, 13977, ..., 9369, 28723, 415],
...,
[21763, 28751, 20300, ..., 28781, 28734, 4775],
[ 354, 396, 10214, ..., 298, 429, 28770],
[ 333, 6149, 28768, ..., 2773, 340, 351]])
2047
tensor([[28723, 415, 3889, ..., 272, 3065, 2609],
[ 403, 3214, 3629, ..., 403, 21163, 16434],
[28723, 13, 28749, ..., 28705, 28750, 28734],
...,
[ 2778, 2251, 28723, ..., 354, 684, 429],
[ 5659, 298, 1038, ..., 5290, 297, 22153],
[ 938, 28723, 1537, ..., 9123, 28733, 12154]])
2048
tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739],
[ 415, 23347, 622, ..., 3937, 2426, 28725],
[28745, 4345, 28723, ..., 338, 28725, 583],
...,
[ 1670, 28709, 5809, ..., 28734, 28760, 393],
[ 340, 1277, 624, ..., 325, 28790, 1329],
[ 523, 1144, 3409, ..., 359, 359, 17422]])
65.97s elapsed
0.00MB states in total
worker_0 0.00MB
worker_1 0.00MB
worker_2 0.00MB
worker_3 0.00MB
worker_4 0.00MB
{'ex_iterable': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 14000}, 'num_examples_since_previous_state': 166, 'previous_state_example_idx': 7394, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 13000}}, 'num_taken': 6560, 'global_example_idx': 7560, 'buffer_state_dict': {'num_taken': 6560, 'global_example_idx': 356, 'index_offset': 0, 'first_state': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 1000}, 'num_examples_since_previous_state': 356, 'previous_state_example_idx': 0, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0}}, 'bit_generator_state': {'state': {'state': 274674114334540486603088602300644985544, 'inc': 332724090758049132448979897138935081983}, 'bit_generator': 'PCG64', 'has_uint32': 0, 'uinteger': 0}}}
Loading state dict
tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739],
[ 415, 23347, 622, ..., 3937, 2426, 28725],
[28745, 4345, 28723, ..., 338, 28725, 583],
...,
[ 1670, 28709, 5809, ..., 28734, 28760, 393],
[ 340, 1277, 624, ..., 325, 28790, 1329],
[ 523, 1144, 3409, ..., 359, 359, 17422]])
24.60s elapsed
```
Not sure if this PR complies with the `datasets` code style. Looking for your help @lhoestq, also very willing to further improve the code if any suggestions are given.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7056/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7055/comments | https://api.github.com/repos/huggingface/datasets/issues/7055/events | https://github.com/huggingface/datasets/issues/7055 | 2,421,708,891 | I_kwDODunzps6QWFhb | 7,055 | WebDataset with different prefixes are unsupported | {
"login": "hlky",
"id": 106811348,
"node_id": "U_kgDOBl3P1A",
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hlky",
"html_url": "https://github.com/hlky",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.github.com/users/hlky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlky/subscriptions",
"organizations_url": "https://api.github.com/users/hlky/orgs",
"repos_url": "https://api.github.com/users/hlky/repos",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"received_events_url": "https://api.github.com/users/hlky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-22T01:14:19 | 2024-07-24T13:26:30 | 2024-07-23T13:28:46 | NONE | null | null | ### Describe the bug
Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k)
Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is given.
```
The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types.
```
The purpose of this check is unclear because PyArrow supports different keys.
Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset.
```
>>> from datasets import load_dataset
>>> path = "shards/*.tar"
>>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True)
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 152/152 [00:00<00:00, 56458.93it/s]
>>> dataset
IterableDataset({
features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'],
n_shards: 152
})
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("bigdata-pw/fashion-150k")
```
### Expected behavior
Dataset loads without error
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.19
- `huggingface_hub` version: 0.23.4
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7055/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7054/comments | https://api.github.com/repos/huggingface/datasets/issues/7054/events | https://github.com/huggingface/datasets/pull/7054 | 2,418,548,995 | PR_kwDODunzps514T1f | 7,054 | Add batching to `IterableDataset` | {
"login": "lappemic",
"id": 61876623,
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lappemic",
"html_url": "https://github.com/lappemic",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"repos_url": "https://api.github.com/users/lappemic/repos",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-19T10:11:47 | 2024-07-23T13:25:13 | 2024-07-23T10:34:28 | CONTRIBUTOR | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7054",
"html_url": "https://github.com/huggingface/datasets/pull/7054",
"diff_url": "https://github.com/huggingface/datasets/pull/7054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7054.patch",
"merged_at": "2024-07-23T10:34:28"
} | I've taken a try at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class.
The main changes are:
1. A new `BatchedExamplesIterable` that groups examples into batches.
2. A `.batch()` method for `IterableDataset` to easily create batched versions.
3. Support for shuffling and sharding to work with PyTorch DataLoader and multiple workers.
I'm not sure if this is exactly what you had in mind and also have not fully tested it atm, so I'd really appreciate your feedback. Does this seem like it's heading in the right direction? I'm happy to make any changes or explore different approaches if needed.
Pinging @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7054/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7053/comments | https://api.github.com/repos/huggingface/datasets/issues/7053/events | https://github.com/huggingface/datasets/issues/7053 | 2,416,423,791 | I_kwDODunzps6QB7Nv | 7,053 | Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple` | {
"login": "MatthewYZhang",
"id": 48289218,
"node_id": "MDQ6VXNlcjQ4Mjg5MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/48289218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatthewYZhang",
"html_url": "https://github.com/MatthewYZhang",
"followers_url": "https://api.github.com/users/MatthewYZhang/followers",
"following_url": "https://api.github.com/users/MatthewYZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/MatthewYZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatthewYZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatthewYZhang/subscriptions",
"organizations_url": "https://api.github.com/users/MatthewYZhang/orgs",
"repos_url": "https://api.github.com/users/MatthewYZhang/repos",
"events_url": "https://api.github.com/users/MatthewYZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatthewYZhang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-18T13:42:35 | 2024-07-18T15:17:42 | 2024-07-18T15:16:18 | NONE | null | null | ### Describe the bug
in data_files.py, line 332,
`fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`
If we run the code on AWS, as fs.protocol will be a tuple like: `('file', 'local')`
So, `isinstance(fs.protocol, str) == False` and
`protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` will raise
`TypeError: can only concatenate tuple (not "str") to tuple`.
### Steps to reproduce the bug
Steps to reproduce:
1. Run on a cloud server like AWS,
2. `import datasets.data_files as datafile`
3. datafile.resolve_pattern('path/to/dataset', '.')
4. `TypeError: can only concatenate tuple (not "str") to tuple`
### Expected behavior
Should return path of the dataset, with fs.protocol at the beginning
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.19
- Huggingface_hub version: 0.23.5
- PyArrow version: 16.1.0
- Pandas version: 1.1.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7053/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7052/comments | https://api.github.com/repos/huggingface/datasets/issues/7052/events | https://github.com/huggingface/datasets/pull/7052 | 2,411,682,730 | PR_kwDODunzps51iuop | 7,052 | Adding `Music` feature for symbolic music modality (MIDI, abc) | {
"login": "Natooz",
"id": 56734983,
"node_id": "MDQ6VXNlcjU2NzM0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Natooz",
"html_url": "https://github.com/Natooz",
"followers_url": "https://api.github.com/users/Natooz/followers",
"following_url": "https://api.github.com/users/Natooz/following{/other_user}",
"gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Natooz/subscriptions",
"organizations_url": "https://api.github.com/users/Natooz/orgs",
"repos_url": "https://api.github.com/users/Natooz/repos",
"events_url": "https://api.github.com/users/Natooz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Natooz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-16T17:26:04 | 2024-07-29T06:47:55 | 2024-07-29T06:47:55 | NONE | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7052",
"html_url": "https://github.com/huggingface/datasets/pull/7052",
"diff_url": "https://github.com/huggingface/datasets/pull/7052.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7052.patch",
"merged_at": null
} | ⚠️ (WIP) ⚠️
### What this PR does
This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files.
### Motivations
These two file formats are widely used in the [Music Information Retrieval (MIR)](https://en.wikipedia.org/wiki/Music_information_retrieval) for tasks such as music generation, music transcription, music synthesis or music transcription. Having a dedicated feature in the datasets library would allow to both encourage researchers to share datasets of this modality as well as making them more easily usable for end users, benefitting from the perks of the library.
These file formats are supported by [symusic](https://github.com/Yikai-Liao/symusic), a lightweight Python library with C bindings (using nanobind) allowing to efficiently read, write and manipulate them. The library is actively developed, and can in the future also implement other file formats such as [musicXML](https://en.wikipedia.org/wiki/MusicXML). As such, this PR relies on it.
The music data can then easily be tokenized with appropriate tokenizers such as [MidiTok](https://github.com/Natooz/MidiTok) or converted to pianorolls matrices by symusic.
**Jul 16th 2024:**
* the tests for the `Music` feature are currently failing due to non-supported access to the LazyBatch in `test_dataset_with_music_feature_map` and `test_dataset_with_music_feature_map_resample_music` (see TODOs). I am a beginner with pyArrow, I'll take any advice to make this work;
* additional tests including the `Music` feature with parquet and WebDataset should be implemented. As of right now, I am waiting for your feedback before taking further steps;
* a `MusicFolder` should also be implemented to comply with the usages of the `Image` and `Audio` features, waiting for your feedback too.
CCing @lhoestq and @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7052/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7051/comments | https://api.github.com/repos/huggingface/datasets/issues/7051/events | https://github.com/huggingface/datasets/issues/7051 | 2,409,353,929 | I_kwDODunzps6Pm9LJ | 7,051 | How to set_epoch with interleave_datasets? | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-15T18:24:52 | 2024-07-22T16:52:07 | null | NONE | null | null | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch)
Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start.
How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset...
Something like
```
dataset_a = load_dataset(...)
dataset_b = load_dataset(...)
def epoch_shuffled_dataset(ds):
# How to make this maintain the number of shards in ds??
for epoch in itertools.count():
ds.set_epoch(epoch)
yield from iter(ds)
shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a})
interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted')
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7051/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/7051/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7050/comments | https://api.github.com/repos/huggingface/datasets/issues/7050/events | https://github.com/huggingface/datasets/pull/7050 | 2,409,048,733 | PR_kwDODunzps51Z1Yp | 7,050 | add checkpoint and resume title in docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-15T15:38:04 | 2024-07-15T16:06:15 | 2024-07-15T15:59:56 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7050",
"html_url": "https://github.com/huggingface/datasets/pull/7050",
"diff_url": "https://github.com/huggingface/datasets/pull/7050.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7050.patch",
"merged_at": "2024-07-15T15:59:56"
} | (minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7050/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7049/comments | https://api.github.com/repos/huggingface/datasets/issues/7049/events | https://github.com/huggingface/datasets/issues/7049 | 2,408,514,366 | I_kwDODunzps6PjwM- | 7,049 | Save nparray as list | {
"login": "Sakurakdx",
"id": 48399040,
"node_id": "MDQ6VXNlcjQ4Mzk5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sakurakdx",
"html_url": "https://github.com/Sakurakdx",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions",
"organizations_url": "https://api.github.com/users/Sakurakdx/orgs",
"repos_url": "https://api.github.com/users/Sakurakdx/repos",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sakurakdx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-15T11:36:11 | 2024-07-18T11:33:34 | 2024-07-18T11:33:34 | NONE | null | null | ### Describe the bug
When I use the `map` function to convert images into features, datasets saves nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?
### Steps to reproduce the bug
the map function
```python
def convert_image_to_features(inst, processor, image_dir):
image_file = inst["image_url"]
file = image_file.split("/")[-1]
image_path = os.path.join(image_dir, file)
image = Image.open(image_path)
image = image.convert("RGBA")
inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"]
return inst
```
main function
```python
map_fun = partial(
convert_image_to_features, processor=processor, image_dir=image_dir
)
ds = ds.map(map_fun, batched=False, num_proc=20)
print(type(ds[0]["pixel_values"])
```
### Expected behavior
(type < list>)
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.23.4
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7049/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7048/comments | https://api.github.com/repos/huggingface/datasets/issues/7048/events | https://github.com/huggingface/datasets/issues/7048 | 2,408,487,547 | I_kwDODunzps6Pjpp7 | 7,048 | ImportError: numpy.core.multiarray when using `filter` | {
"login": "kamilakesbi",
"id": 45195979,
"node_id": "MDQ6VXNlcjQ1MTk1OTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamilakesbi",
"html_url": "https://github.com/kamilakesbi",
"followers_url": "https://api.github.com/users/kamilakesbi/followers",
"following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}",
"gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions",
"organizations_url": "https://api.github.com/users/kamilakesbi/orgs",
"repos_url": "https://api.github.com/users/kamilakesbi/repos",
"events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamilakesbi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-15T11:21:04 | 2024-07-16T10:11:25 | 2024-07-16T10:11:25 | NONE | null | null | ### Describe the bug
I can't apply the filter method on my dataset.
### Steps to reproduce the bug
The following snippet generates a bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
)
```
I get the following error:
`ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).`
### Expected behavior
It should work properly!
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7048/timeline | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7047/comments | https://api.github.com/repos/huggingface/datasets/issues/7047/events | https://github.com/huggingface/datasets/issues/7047 | 2,406,495,084 | I_kwDODunzps6PcDNs | 7,047 | Save Dataset as Sharded Parquet | {
"login": "tom-p-reichel",
"id": 43631024,
"node_id": "MDQ6VXNlcjQzNjMxMDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tom-p-reichel",
"html_url": "https://github.com/tom-p-reichel",
"followers_url": "https://api.github.com/users/tom-p-reichel/followers",
"following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}",
"gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions",
"organizations_url": "https://api.github.com/users/tom-p-reichel/orgs",
"repos_url": "https://api.github.com/users/tom-p-reichel/repos",
"events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}",
"received_events_url": "https://api.github.com/users/tom-p-reichel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-12T23:47:51 | 2024-07-17T12:07:08 | null | NONE | null | null | ### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet.
### Your contribution
I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158
to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7047/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7046/comments | https://api.github.com/repos/huggingface/datasets/issues/7046/events | https://github.com/huggingface/datasets/pull/7046 | 2,405,485,582 | PR_kwDODunzps51N05n | 7,046 | Support librosa and numpy 2.0 for Python 3.10 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-12T12:42:47 | 2024-07-12T13:04:40 | 2024-07-12T12:58:17 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7046",
"html_url": "https://github.com/huggingface/datasets/pull/7046",
"diff_url": "https://github.com/huggingface/datasets/pull/7046.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7046.patch",
"merged_at": "2024-07-12T12:58:17"
} | Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release:
- https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1
- https://github.com/dofuuz/python-soxr/issues/28 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7046/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7045/comments | https://api.github.com/repos/huggingface/datasets/issues/7045/events | https://github.com/huggingface/datasets/pull/7045 | 2,405,447,858 | PR_kwDODunzps51Nsie | 7,045 | Fix tensorflow min version depending on Python version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-12T12:20:23 | 2024-07-12T12:38:53 | 2024-07-12T12:33:00 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7045",
"html_url": "https://github.com/huggingface/datasets/pull/7045",
"diff_url": "https://github.com/huggingface/datasets/pull/7045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7045.patch",
"merged_at": "2024-07-12T12:33:00"
} | Fix tensorflow min version depending on Python version.
Related to:
- #6991 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7045/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7044/comments | https://api.github.com/repos/huggingface/datasets/issues/7044/events | https://github.com/huggingface/datasets/pull/7044 | 2,405,002,987 | PR_kwDODunzps51MLbh | 7,044 | Mark tests that require librosa | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-12T08:06:59 | 2024-07-12T09:06:32 | 2024-07-12T09:00:09 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7044",
"html_url": "https://github.com/huggingface/datasets/pull/7044",
"diff_url": "https://github.com/huggingface/datasets/pull/7044.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7044.patch",
"merged_at": "2024-07-12T09:00:09"
} | Mark tests that require `librosa`.
Note that `librosa` is an optional dependency (installed with `audio` option) and we should be able to test environments without that library installed. This is the case if we want to test Numpy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`:
- https://github.com/dofuuz/python-soxr/issues/28 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7044/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7043/comments | https://api.github.com/repos/huggingface/datasets/issues/7043/events | https://github.com/huggingface/datasets/pull/7043 | 2,404,951,714 | PR_kwDODunzps51MAN0 | 7,043 | Add decorator as explicit test dependency | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-12T07:35:23 | 2024-07-12T08:12:55 | 2024-07-12T08:07:10 | MEMBER | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7043",
"html_url": "https://github.com/huggingface/datasets/pull/7043",
"diff_url": "https://github.com/huggingface/datasets/pull/7043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7043.patch",
"merged_at": "2024-07-12T08:07:10"
} | Add decorator as explicit test dependency.
We use `decorator` library in our CI test since PR:
- #4845
However we did not add it as an explicit test requirement, and we depended on it indirectly through other libraries' dependencies.
I discovered this while testing Numpy 2.0 and removing incompatible libraries. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7043/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7042/comments | https://api.github.com/repos/huggingface/datasets/issues/7042/events | https://github.com/huggingface/datasets/pull/7042 | 2,404,605,836 | PR_kwDODunzps51K8CM | 7,042 | Improved the tutorial by adding a link for loading datasets | {
"login": "AmboThom",
"id": 41874659,
"node_id": "MDQ6VXNlcjQxODc0NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/41874659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmboThom",
"html_url": "https://github.com/AmboThom",
"followers_url": "https://api.github.com/users/AmboThom/followers",
"following_url": "https://api.github.com/users/AmboThom/following{/other_user}",
"gists_url": "https://api.github.com/users/AmboThom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmboThom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmboThom/subscriptions",
"organizations_url": "https://api.github.com/users/AmboThom/orgs",
"repos_url": "https://api.github.com/users/AmboThom/repos",
"events_url": "https://api.github.com/users/AmboThom/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmboThom/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-12T03:49:54 | 2024-07-12T03:49:54 | null | NONE | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7042",
"html_url": "https://github.com/huggingface/datasets/pull/7042",
"diff_url": "https://github.com/huggingface/datasets/pull/7042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7042.patch",
"merged_at": null
} | Improved the tutorial by letting readers know about loading datasets with common files and including a link. I left the local files section alone because the methods were already listed with code snippets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7042/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7041/comments | https://api.github.com/repos/huggingface/datasets/issues/7041/events | https://github.com/huggingface/datasets/issues/7041 | 2,404,576,038 | I_kwDODunzps6PUusm | 7,041 | `sort` after `filter` unreasonably slow | {
"login": "Tobin-rgb",
"id": 56711045,
"node_id": "MDQ6VXNlcjU2NzExMDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/56711045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tobin-rgb",
"html_url": "https://github.com/Tobin-rgb",
"followers_url": "https://api.github.com/users/Tobin-rgb/followers",
"following_url": "https://api.github.com/users/Tobin-rgb/following{/other_user}",
"gists_url": "https://api.github.com/users/Tobin-rgb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tobin-rgb/subscriptions",
"organizations_url": "https://api.github.com/users/Tobin-rgb/orgs",
"repos_url": "https://api.github.com/users/Tobin-rgb/repos",
"events_url": "https://api.github.com/users/Tobin-rgb/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tobin-rgb/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2024-07-12T03:29:27 | 2024-07-22T13:55:17 | null | NONE | null | null | ### Describe the bug
as the tittle says ...
### Steps to reproduce the bug
`sort` seems to be normal.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("finish sort")
```
but `sort` after `filter` is extremely slow.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x:x > 100, input_columns="k")
print("start sort")
ds = ds.sort("k")
print("finish sort")
```
### Expected behavior
Is this a bug, or is it a misuse of the `sort` function?
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7041/timeline | null | false |
End of preview. Expand
in Dataset Viewer.
Dataset Card for GitHub Issues
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It can be used for semantic search or multilabel text classification.
- Downloads last month
- 36