url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | body (string, nullable) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6203/comments | https://api.github.com/repos/huggingface/datasets/issues/6203/events | https://github.com/huggingface/datasets/issues/6203 | 1,877,491,602 | I_kwDODunzps5v6D-S | 6,203 | Support loading from a DVC remote repository | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-09-01T14:04:52 | 2023-09-01T14:04:52 | null | NONE | null | ### Feature request
Add support for loading a file from a DVC repository tracked remotely on an SCM.
### Motivation
DVC is a popular version control system for versioning and managing datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`.
I have a Gitlab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files with `datasets` directly from a URL. My goal is to write generic code that abstracts the storage layer, such that my users only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded.
### Your contribution
I managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from a `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC.
```python
from fsspec.core import url_to_fs
fs, _ = url_to_fs("dvc::https://gitlab.com/repository/group/my-repo")
```
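For reference, here is a minimal sketch of the direct file access this enables once the filesystem is instantiated (a hedged illustration: it only assumes the standard fsspec `open` API, and the path is the illustrative one mentioned below):
```python
# hypothetical usage: read a DVC-tracked file relative to the repository root
with fs.open("my-folder/my-file.json") as f:
    raw_bytes = f.read()
```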
From here I'm not sure how to continue: it seems that `datasets` expects the URL to be fully qualified, like `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json`, but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6203/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6202/comments | https://api.github.com/repos/huggingface/datasets/issues/6202/events | https://github.com/huggingface/datasets/issues/6202 | 1,876,630,351 | I_kwDODunzps5v2xtP | 6,202 | avoid downgrading jax version | {
"login": "chrisflesher",
"id": 1332458,
"node_id": "MDQ6VXNlcjEzMzI0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1332458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisflesher",
"html_url": "https://github.com/chrisflesher",
"followers_url": "https://api.github.com/users/chrisflesher/followers",
"following_url": "https://api.github.com/users/chrisflesher/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisflesher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisflesher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisflesher/subscriptions",
"organizations_url": "https://api.github.com/users/chrisflesher/orgs",
"repos_url": "https://api.github.com/users/chrisflesher/repos",
"events_url": "https://api.github.com/users/chrisflesher/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisflesher/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-09-01T02:57:57 | 2023-09-01T02:58:53 | null | NONE | null | ### Feature request
Whenever I `pip install datasets[jax]` it downgrades jax to version 0.3.25. I seem to be able to install this library first then upgrade jax back to version 0.4.13.
### Motivation
It would be nice not to overwrite the currently installed version of jax, if possible.
### Your contribution
I would be willing to beta test, or maybe write some code if I could get pointed in the right direction; I'm not super familiar with this codebase. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6202/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6201/comments | https://api.github.com/repos/huggingface/datasets/issues/6201/events | https://github.com/huggingface/datasets/pull/6201 | 1,875,256,775 | PR_kwDODunzps5ZOVbV | 6,201 | Fix to_json ValueError and remove pandas pin | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-31T10:38:08 | 2023-08-31T14:08:51 | null | MEMBER | null | This PR fixes the root cause of the issue:
- #6197
This PR also removes the temporary pin of `pandas` introduced by:
- #6200
Note that for orient in ['records', 'values'], the index value is ignored, but:
- in `pandas` < 2.1.0, a ValueError is raised if not index and orient not in ['split', 'table']
  - for orient = 'records', we need index = True
  - the default index value is True
- in `pandas` = 2.1.0, a ValueError is raised if index is True and orient in ['records', 'values']
  - for orient = 'records', we need index = False or None
  - the default index value is None
This PR fixes the issue by not passing index, thus using the default index value (valid for all pandas versions), unless orient is 'split' or 'table' (where we pass index = False, as was done before this fix). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6201/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6201",
"html_url": "https://github.com/huggingface/datasets/pull/6201",
"diff_url": "https://github.com/huggingface/datasets/pull/6201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6201.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6200/comments | https://api.github.com/repos/huggingface/datasets/issues/6200/events | https://github.com/huggingface/datasets/pull/6200 | 1,875,169,551 | PR_kwDODunzps5ZOCee | 6,200 | Temporarily pin pandas < 2.1.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-31T09:45:17 | 2023-08-31T10:33:24 | 2023-08-31T10:24:38 | MEMBER | null | Temporarily pin `pandas` < 2.1.0 until permanent solution is found.
Hot fix #6197. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6200/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6200",
"html_url": "https://github.com/huggingface/datasets/pull/6200",
"diff_url": "https://github.com/huggingface/datasets/pull/6200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6200.patch",
"merged_at": "2023-08-31T10:24:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6199/comments | https://api.github.com/repos/huggingface/datasets/issues/6199/events | https://github.com/huggingface/datasets/issues/6199 | 1,875,165,185 | I_kwDODunzps5vxMAB | 6,199 | Use load_dataset for local json files, but it not works | {
"login": "Garen-in-bush",
"id": 50519434,
"node_id": "MDQ6VXNlcjUwNTE5NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/50519434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Garen-in-bush",
"html_url": "https://github.com/Garen-in-bush",
"followers_url": "https://api.github.com/users/Garen-in-bush/followers",
"following_url": "https://api.github.com/users/Garen-in-bush/following{/other_user}",
"gists_url": "https://api.github.com/users/Garen-in-bush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Garen-in-bush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Garen-in-bush/subscriptions",
"organizations_url": "https://api.github.com/users/Garen-in-bush/orgs",
"repos_url": "https://api.github.com/users/Garen-in-bush/repos",
"events_url": "https://api.github.com/users/Garen-in-bush/events{/privacy}",
"received_events_url": "https://api.github.com/users/Garen-in-bush/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-08-31T09:42:34 | 2023-08-31T19:05:07 | null | NONE | null | ### Describe the bug
When I use load_dataset to load my local datasets, it always goes to Hugging Face to download the data instead of loading the local dataset.
### Steps to reproduce the bug
`raw_datasets = load_dataset('json', data_files=data_files)`
### Expected behavior
![image](https://github.com/huggingface/datasets/assets/50519434/add3747f-6481-4da7-b374-8f81c5a6472c)
### Environment info
python version 3.8.5
datasets version 2.12
os version ubuntu 18.04 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6199/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6198/comments | https://api.github.com/repos/huggingface/datasets/issues/6198/events | https://github.com/huggingface/datasets/pull/6198 | 1,875,092,027 | PR_kwDODunzps5ZNyBq | 6,198 | Preserve split order in DataFilesDict | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-31T09:00:26 | 2023-08-31T13:57:31 | 2023-08-31T13:48:42 | MEMBER | null | After investigation, I have found that this copy forces the splits to be sorted alphabetically: https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/builder.py#L556
This PR removes the alphabetical sort of `DataFilesDict` keys.
- Note that for a `dict`, the order of keys is relevant when hashing:
```python
hash1 = Hasher.hash({'train': 'train.csv', 'test': 'test.csv'})
hash2 = Hasher.hash({'test': 'test.csv', 'train': 'train.csv'})
assert hash1 != hash2
```
- The `DataFilesDict` is a subclass of `dict`, thus the order should be relevant as well
```python
hash1 = Hasher.hash(DataFilesDict({'train': 'train.csv', 'test': 'test.csv'}))
hash2 = Hasher.hash(DataFilesDict({'test': 'test.csv', 'train': 'train.csv'}))
assert hash1 != hash2
```
Fix #6196. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6198/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6198",
"html_url": "https://github.com/huggingface/datasets/pull/6198",
"diff_url": "https://github.com/huggingface/datasets/pull/6198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6198.patch",
"merged_at": "2023-08-31T13:48:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6197/comments | https://api.github.com/repos/huggingface/datasets/issues/6197/events | https://github.com/huggingface/datasets/issues/6197 | 1,875,078,155 | I_kwDODunzps5vw2wL | 6,197 | ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns' | {
"login": "exs-avianello",
"id": 128361578,
"node_id": "U_kgDOB6akag",
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exs-avianello",
"html_url": "https://github.com/exs-avianello",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 3 | 2023-08-31T08:51:50 | 2023-09-01T10:35:10 | 2023-08-31T10:24:40 | NONE | null | ### Describe the bug
Saving a dataset with `.to_json()` fails with a `ValueError` since the latest `pandas` [release](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html) (`2.1.0`)
In their latest release we have:
> Improved error handling when using [DataFrame.to_json()](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.to_json.html#pandas.DataFrame.to_json) with incompatible index and orient arguments ([GH 52143](https://github.com/pandas-dev/pandas/issues/52143))
i.e. an error is now raised for invalid combinations of `index` and `orient`.
This means that unfortunately the custom logic at this line might sometimes lead to contradictions:
https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/io/json.py#L96
e.g. for the default case `orient=records` leads to `index=True`, which now raises a `ValueError`
### Steps to reproduce the bug
```python
import datasets
if __name__ == '__main__':
dataset = datasets.Dataset.from_dict({"A": [1, 2, 3], "B": [4, 5, 6]})
dataset.to_json("dataset.json")
```
```shell
>>>
ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns'.
```
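For context, a minimal pandas-level sketch of the incompatibility (assuming only the behavior described in the pandas 2.1.0 release notes quoted above):
```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

# pandas >= 2.1.0: explicitly passing index=True with orient="records" raises a ValueError,
# whereas older pandas versions silently ignored the index argument for this orient
df.to_json("fails.json", orient="records", lines=True, index=True)

# leaving index unset (the default) works across versions
df.to_json("works.json", orient="records", lines=True)
```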
### Expected behavior
The dataset is successfully saved as `.json`
### Environment info
`python >= 3.9`
`pandas >= 2.1.0` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6197/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6196/comments | https://api.github.com/repos/huggingface/datasets/issues/6196/events | https://github.com/huggingface/datasets/issues/6196 | 1,875,070,972 | I_kwDODunzps5vw0_8 | 6,196 | Split order is not preserved | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-08-31T08:47:16 | 2023-08-31T13:48:43 | 2023-08-31T13:48:43 | MEMBER | null | I have noticed that in some cases the split order is not preserved.
For example, consider a no-script dataset with configs:
```yaml
configs:
- config_name: default
data_files:
- split: train
path: train.csv
- split: test
path: test.csv
```
- Note the defined split order is [train, test]
Once the dataset is loaded, the split order is not preserved:
```python
In [16]: ds
Out[16]:
DatasetDict({
test: Dataset({
features: ['text', 'label'],
num_rows: 1
})
train: Dataset({
features: ['text', 'label'],
num_rows: 2
})
})
```
- Note the obtained split order is [test, train] | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6196/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6195/comments | https://api.github.com/repos/huggingface/datasets/issues/6195/events | https://github.com/huggingface/datasets/issues/6195 | 1,874,195,585 | I_kwDODunzps5vtfSB | 6,195 | Force to reuse cache at given path | {
"login": "Luosuu",
"id": 43507393,
"node_id": "MDQ6VXNlcjQzNTA3Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/43507393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luosuu",
"html_url": "https://github.com/Luosuu",
"followers_url": "https://api.github.com/users/Luosuu/followers",
"following_url": "https://api.github.com/users/Luosuu/following{/other_user}",
"gists_url": "https://api.github.com/users/Luosuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Luosuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luosuu/subscriptions",
"organizations_url": "https://api.github.com/users/Luosuu/orgs",
"repos_url": "https://api.github.com/users/Luosuu/repos",
"events_url": "https://api.github.com/users/Luosuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Luosuu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-30T18:44:54 | 2023-08-30T19:00:45 | 2023-08-30T19:00:45 | NONE | null | ### Describe the bug
I have run the official MLM example as follows:
```bash
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name togethercomputer/RedPajama-Data-1T \
--dataset_config_name arxiv \
--per_device_train_batch_size 10 \
--preprocessing_num_workers 20 \
--validation_split_percentage 0 \
--cache_dir /project/huggingface_cache/datasets \
--line_by_line \
--do_train \
--pad_to_max_length \
--output_dir /project/huggingface_cache/test-mlm
```
It runs successfully, and my cache folder contains `cache-1982fea76aa54a13_00001_of_00020.arrow` ..... `cache-1982fea76aa54a13_00020_of_00020.arrow` as the tokenization cache of the `map` method. The cache works fine every time I run the command above.
However, when I switched to a Jupyter notebook (since I do not want to reload the dataset every time I change other parameters not related to data loading), it does not recognize the cache files and starts to re-run the entire tokenization process.
I changed my code to
```python
tokenized_datasets = raw_datasets["train"].map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=True,
desc="Running tokenizer on dataset line_by_line",
# cache_file_names= {"train": "cache-1982fea76aa54a13.arrow"}
cache_file_name="cache-1982fea76aa54a13.arrow",
new_fingerprint="1982fea76aa54a13"
)
```
it still does not recognize the previously cached files and tries to re-run the tokenization process.
### Steps to reproduce the bug
Use a Jupyter notebook to run the dataset `map` function.
### Expected behavior
The map function accepts the given cache_file_name and new_fingerprint and then loads the previously cached files.
### Environment info
- `datasets` version: 2.14.4.dev0
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6195/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6194/comments | https://api.github.com/repos/huggingface/datasets/issues/6194/events | https://github.com/huggingface/datasets/issues/6194 | 1,872,598,223 | I_kwDODunzps5vnZTP | 6,194 | Support custom fingerprinting with `Dataset.from_generator` | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-08-29T22:43:13 | 2023-08-30T17:33:21 | null | NONE | null | ### Feature request
When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`.
### Motivation
Using the `.from_generator` constructor with a non-picklable generator fails. By accepting a `fingerprint` argument to `.from_generator`, the user would have the opportunity to manually fingerprint the dataset and thus bypass the crash.
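A minimal sketch of the proposed usage (the `fingerprint` argument is hypothetical here and does not exist yet; it mirrors the way `.map` lets the user control fingerprinting):
```python
from datasets import Dataset

def gen():
    # e.g. a generator that closes over a non-picklable object
    yield {"text": "hello"}
    yield {"text": "world"}

# hypothetical API: bypass hashing the generator by supplying a manual fingerprint
ds = Dataset.from_generator(gen, fingerprint="my-manual-fingerprint")
```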
### Your contribution
If validated, I can try to submit a PR for this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6194/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6193/comments | https://api.github.com/repos/huggingface/datasets/issues/6193/events | https://github.com/huggingface/datasets/issues/6193 | 1,872,285,153 | I_kwDODunzps5vmM3h | 6,193 | Dataset loading script method does not work with .pyc file | {
"login": "riteshkumarumassedu",
"id": 43389071,
"node_id": "MDQ6VXNlcjQzMzg5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riteshkumarumassedu",
"html_url": "https://github.com/riteshkumarumassedu",
"followers_url": "https://api.github.com/users/riteshkumarumassedu/followers",
"following_url": "https://api.github.com/users/riteshkumarumassedu/following{/other_user}",
"gists_url": "https://api.github.com/users/riteshkumarumassedu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riteshkumarumassedu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riteshkumarumassedu/subscriptions",
"organizations_url": "https://api.github.com/users/riteshkumarumassedu/orgs",
"repos_url": "https://api.github.com/users/riteshkumarumassedu/repos",
"events_url": "https://api.github.com/users/riteshkumarumassedu/events{/privacy}",
"received_events_url": "https://api.github.com/users/riteshkumarumassedu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-29T19:35:06 | 2023-08-31T19:47:29 | null | NONE | null | ### Describe the bug
The Hugging Face datasets library specifically looks for a `.py` file when loading a dataset via the loading-script approach, and it does not work with a `.pyc` file.
While deploying in production, this becomes an issue when we are restricted to using only `.pyc` files. Is there any workaround for this?
### Steps to reproduce the bug
1. Create a dataset loading script to read the custom data.
2. Compile the code to make sure that a .pyc file is created (see the sketch below).
3. Delete the loading script and re-run the code. Usually, Python should make use of compiled .pyc files. However, in this case, the datasets library errors out with the message that it's unable to find the dataset loading script.
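A minimal sketch of step 2, assuming the standard library `py_compile` module (paths are illustrative):
```python
import py_compile

# compile the dataset loading script to bytecode; the .pyc file lands in __pycache__/
py_compile.compile("my_dataset/my_dataset.py")
```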
### Expected behavior
The code should make use of .pyc file and run without any error.
### Environment info
NA | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6193/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6192/comments | https://api.github.com/repos/huggingface/datasets/issues/6192/events | https://github.com/huggingface/datasets/pull/6192 | 1,871,911,640 | PR_kwDODunzps5ZDGnI | 6,192 | Set minimal fsspec version requirement to 2023.1.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-29T15:23:41 | 2023-08-30T14:01:56 | 2023-08-30T13:51:32 | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/6141
Colab installs 2023.6.0, so we should be good 🙂
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6192/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6192",
"html_url": "https://github.com/huggingface/datasets/pull/6192",
"diff_url": "https://github.com/huggingface/datasets/pull/6192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6192.patch",
"merged_at": "2023-08-30T13:51:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6191/comments | https://api.github.com/repos/huggingface/datasets/issues/6191/events | https://github.com/huggingface/datasets/pull/6191 | 1,871,634,840 | PR_kwDODunzps5ZCKmv | 6,191 | Add missing `revision` argument | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-29T13:05:04 | 2023-08-31T14:19:54 | 2023-08-31T13:50:00 | CONTRIBUTOR | null | I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6191/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6191",
"html_url": "https://github.com/huggingface/datasets/pull/6191",
"diff_url": "https://github.com/huggingface/datasets/pull/6191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6191.patch",
"merged_at": "2023-08-31T13:50:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6190/comments | https://api.github.com/repos/huggingface/datasets/issues/6190/events | https://github.com/huggingface/datasets/issues/6190 | 1,871,582,175 | I_kwDODunzps5vjhPf | 6,190 | `Invalid user token` even when correct user token is passed! | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-29T12:37:03 | 2023-08-29T13:01:10 | 2023-08-29T13:01:09 | MEMBER | null | ### Describe the bug
I'm working on a dataset which comprises other datasets on the hub.
URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
Note: Some of the sub-datasets in this metadataset require explicit access.
All the other datasets work fine, except `common_voice`.
### Steps to reproduce the bug
https://github.com/Vaibhavs10/scratchpad/blob/main/cv_datasets_bug_repro.ipynb
### Expected behavior
It should work if the provided access token is valid (as it does for all the other datasets)
### Environment info
datasets version -> 2.14.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6190/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6189/comments | https://api.github.com/repos/huggingface/datasets/issues/6189/events | https://github.com/huggingface/datasets/pull/6189 | 1,871,569,855 | PR_kwDODunzps5ZB8Z9 | 6,189 | Don't alter input in Features.from_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-29T12:29:47 | 2023-08-29T13:04:59 | 2023-08-29T12:52:48 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6189/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6189",
"html_url": "https://github.com/huggingface/datasets/pull/6189",
"diff_url": "https://github.com/huggingface/datasets/pull/6189.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6189.patch",
"merged_at": "2023-08-29T12:52:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6188/comments | https://api.github.com/repos/huggingface/datasets/issues/6188/events | https://github.com/huggingface/datasets/issues/6188 | 1,870,987,640 | I_kwDODunzps5vhQF4 | 6,188 | [Feature Request] Check the length of batch before writing so that empty batch is allowed | {
"login": "namespace-Pt",
"id": 61188463,
"node_id": "MDQ6VXNlcjYxMTg4NDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namespace-Pt",
"html_url": "https://github.com/namespace-Pt",
"followers_url": "https://api.github.com/users/namespace-Pt/followers",
"following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}",
"gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions",
"organizations_url": "https://api.github.com/users/namespace-Pt/orgs",
"repos_url": "https://api.github.com/users/namespace-Pt/repos",
"events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}",
"received_events_url": "https://api.github.com/users/namespace-Pt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-29T06:37:34 | 2023-08-30T13:37:14 | null | NONE | null | ### Use Case
I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error is thrown:
```
ValueError: Schema and number of arrays unequal
```
This is because the empty batch does not comply with the schema of the other batches. I think an empty batch should be allowed to facilitate coding (one does not need to manually assign an empty list for all keys).
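For illustration, a minimal sketch of a batched map function that can legitimately return an empty batch (column names are illustrative):
```python
def process_fn(batch):
    # keep only sufficiently long texts; an entire batch may be filtered out
    keep = [len(text) > 10 for text in batch["text"]]
    return {key: [v for v, ok in zip(values, keep) if ok] for key, values in batch.items()}

# dataset.map(process_fn, batched=True)  # currently fails when a batch comes back empty
```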
A simple fix is to check the length of `batch` before writing:
```python
if len(batch):
    writer.write_batch(batch)
```
instead of
https://github.com/huggingface/datasets/blob/74d60213dcbd7c99484c62ce1d3dfd90a1df0770/src/datasets/arrow_dataset.py#L3493
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6188/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6187/comments | https://api.github.com/repos/huggingface/datasets/issues/6187/events | https://github.com/huggingface/datasets/issues/6187 | 1,870,936,143 | I_kwDODunzps5vhDhP | 6,187 | Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-29T05:49:56 | 2023-08-29T16:21:45 | null | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
8 csv_datasets_reloaded
2 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1489 raise e1 from None
1490 if isinstance(e1, FileNotFoundError):
-> 1491 raise FileNotFoundError(
1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub
```
### Steps to reproduce the bug
```
data_files = {
"train": "/content/PUBHEALTH/train.tsv",
"validation": "/content/PUBHEALTH/dev.tsv",
"test": "/content/PUBHEALTH/test.tsv",
}
tsv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
tsv_datasets_reloaded
```
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-48-6a7b3e847019> in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
8 csv_datasets_reloaded
2 frames
/usr/local/lib/python3.10/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1489 raise e1 from None
1490 if isinstance(e1, FileNotFoundError):
-> 1491 raise FileNotFoundError(
1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub
```
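For reference, a sketch of a possible workaround: there is no dedicated "tsv" builder, but TSV files can usually be loaded through the generic "csv" builder with a tab separator (assuming the pandas-style `sep` keyword is forwarded):
```python
from datasets import load_dataset

data_files = {
    "train": "/content/PUBHEALTH/train.tsv",
    "validation": "/content/PUBHEALTH/dev.tsv",
    "test": "/content/PUBHEALTH/test.tsv",
}

# assumption: the "csv" builder accepts a tab separator for TSV files
tsv_datasets_reloaded = load_dataset("csv", data_files=data_files, sep="\t")
```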
### Expected behavior
load the data, push to hub
### Environment info
jupyter notebook RTX 3090 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6187/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6186/comments | https://api.github.com/repos/huggingface/datasets/issues/6186/events | https://github.com/huggingface/datasets/issues/6186 | 1,869,431,457 | I_kwDODunzps5vbUKh | 6,186 | Feature request: add code example of multi-GPU processing | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2023-08-28T10:00:59 | 2023-08-30T13:34:14 | null | CONTRIBUTOR | null | ### Feature request
It would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs have a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here"; however, it didn't work for me out of the box.
Let's say you have a PyTorch model that can do translation, and you have multiple GPUs. In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel.
Here's how I tried to do that:
```
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from multiprocess import set_start_method
import torch
import os
dataset = load_dataset("mlfoundations/datacomp_small")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
# put model on each available GPU
# also, should I do it like this or use nn.DataParallel?
model.to("cuda:0")
model.to("cuda:1")
set_start_method("spawn")
def translate_captions(batch, rank):
os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())
texts = batch["text"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
)
translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
batch["translated_text"] = translated_texts
return batch
updated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256)
```
I've personally tried running this script on a machine with 2 A100 GPUs.
## Error 1
Running the code snippet above from the terminal (python script.py) resulted in the following error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 125, in _main
prepare(preparation_data)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/niels/python_projects/datacomp/datasets_multi_gpu.py", line 16, in <module>
set_start_method("spawn")
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 247, in set_start_method
raise RuntimeError('context has already been set')
RuntimeError: context has already been set
```
## Error 2
Then, based on [this Stackoverflow answer](https://stackoverflow.com/a/71616344/7762882), I put the `set_start_method("spawn")` section in a try: catch block. This resulted in the following error:
```
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py", line 817, in <dictcomp>
k: dataset.map(
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2926, in map
with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool:
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 215, in __init__
self._repopulate_pool()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 288, in _Popen
return Popen(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
So then I put the last line under an `if __name__ == '__main__':` block. The code snippet then seemed to work, but it appeared to leverage only a single GPU (based on monitoring `nvidia-smi`):
```
Mon Aug 28 12:19:24 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... On | 00000000:01:00.0 Off | 0 |
| N/A 55C P0 76W / 275W | 8747MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A100-SXM... On | 00000000:47:00.0 Off | 0 |
| N/A 67C P0 274W / 275W | 59835MiB / 81920MiB | 100% Default |
| | | Disabled |
```
Both GPUs should have roughly equal usage, but I've always noticed that the last GPU has far more usage than the other one. This made me think that `os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())` might not work inside a Python script, especially if done after importing PyTorch?
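One variant that might be worth documenting as well (a sketch only, reusing the `model`, `tokenizer`, and `torch` names from the snippet above; whether it actually balances the two GPUs is exactly what the docs should confirm) is to pin each worker's model copy to its GPU with `.to()` instead of restricting visibility through `CUDA_VISIBLE_DEVICES`:
```python
def translate_captions(batch, rank):
    # Hypothetical variant: move this worker's copy of the model to one GPU explicitly,
    # rather than limiting device visibility via CUDA_VISIBLE_DEVICES.
    device = f"cuda:{rank % torch.cuda.device_count()}"
    model.to(device)
    inputs = tokenizer(batch["text"], padding=True, truncation=True, return_tensors="pt").to(device)
    translated_tokens = model.generate(
        **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
    )
    batch["translated_text"] = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
    return batch
```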
### Motivation
Would be great to clarify how to do multi-GPU data processing.
### Your contribution
If my code snippet can be fixed, I can contribute it to the docs :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6186/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6185/comments | https://api.github.com/repos/huggingface/datasets/issues/6185/events | https://github.com/huggingface/datasets/issues/6185 | 1,868,077,748 | I_kwDODunzps5vWJq0 | 6,185 | Error in saving the PIL image into *.arrow files using datasets.arrow_writer | {
"login": "HaozheZhao",
"id": 14247682,
"node_id": "MDQ6VXNlcjE0MjQ3Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14247682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaozheZhao",
"html_url": "https://github.com/HaozheZhao",
"followers_url": "https://api.github.com/users/HaozheZhao/followers",
"following_url": "https://api.github.com/users/HaozheZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/HaozheZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaozheZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaozheZhao/subscriptions",
"organizations_url": "https://api.github.com/users/HaozheZhao/orgs",
"repos_url": "https://api.github.com/users/HaozheZhao/repos",
"events_url": "https://api.github.com/users/HaozheZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaozheZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-26T12:15:57 | 2023-08-29T14:49:58 | null | NONE | null | ### Describe the bug
I am using the `ArrowWriter` from `datasets.arrow_writer` to save a JSON-style dictionary as Arrow files. The dictionary contains a feature called "image", which is a list of PIL.Image objects.
I am saving the json using the following script:
```python
from datasets.arrow_writer import ArrowWriter  # import needed for this snippet to run

def save_to_arrow(path,temp):
with ArrowWriter(path=path,writer_batch_size=20) as writer:
writer.write_batch(temp)
writer.finalize()
```
However, when I attempt to restore the dataset and use the `Dataset.from_file(path)` function to load the arrow file, there seems to be an issue with the PIL.Image objects in the dataset. The list of PIL.Images appears as follows rather than as normal PIL.Image objects:
![1693051705440](https://github.com/huggingface/datasets/assets/14247682/03b204c2-d0fa-4d19-beff-6f4d7b83c848)
### Steps to reproduce the bug
1. Storing the data json into arrow files:
```
def save_to_arrow(path,temp):
with ArrowWriter(path=path,writer_batch_size=20) as writer:
writer.write_batch(temp)
writer.finalize()
save_to_arrow( path, json_file )
```
2. try to load the arrow file into the Dataset object using the ```Dataset.from_file(path)```
### Expected behavior
Expected the contained "image" feature to be saved as a list of PIL.Image objects in the Arrow file, and to be able to restore the dataset from that file.
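A possible workaround, sketched under the assumption that `ArrowWriter` accepts explicit `features` and that encoding the batch with an `Image()` feature stores the pictures in the dedicated image format instead of as raw Python objects (the `"text"` column is only a placeholder for the other fields in the dictionary):
```python
from datasets import Dataset, Features, Image, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"image": Image(), "text": Value("string")})  # illustrative schema

def save_to_arrow(path, temp):
    with ArrowWriter(path=path, features=features, writer_batch_size=20) as writer:
        # encode PIL.Image objects into the Image() storage format before writing
        writer.write_batch(features.encode_batch(temp))
        writer.finalize()

# the features are written into the schema metadata, so the column should decode back to PIL.Image
# dataset = Dataset.from_file(path)
```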
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.17
- Python version: 3.8.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.4.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6185/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6184/comments | https://api.github.com/repos/huggingface/datasets/issues/6184/events | https://github.com/huggingface/datasets/issues/6184 | 1,867,766,143 | I_kwDODunzps5vU9l_ | 6,184 | Map cache does not detect function changes in another module | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | 2 | 2023-08-25T22:59:14 | 2023-08-29T20:57:07 | 2023-08-29T20:56:49 | NONE | null | ```python
# dataset.py
import os
import datasets
if not os.path.exists('/tmp/test.json'):
with open('/tmp/test.json', 'w') as file:
file.write('[{"text": "hello"}]')
def transform(example):
text = example['text']
# text += ' world'
return {'text': text}
data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')
data = data.map(transform)
```
```python
# test.py
import dataset
print(next(iter(dataset.data)))
```
Initialize cache
```
python3 test.py
# {'text': 'hello'}
```
Edit dataset.py and uncomment the commented line, run again
```
python3 test.py
# {'text': 'hello'}
# expected: {'text': 'hello world'}
```
Clear cache and run again
```
rm -rf ~/.cache/huggingface/datasets/*
python3 test.py
# {'text': 'hello world'}
```
If instead the two files are combined, then changes to the function are detected correctly. But in any realistic codebase, it's expected that code will be modularized into separate files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6184/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6183/comments | https://api.github.com/repos/huggingface/datasets/issues/6183/events | https://github.com/huggingface/datasets/issues/6183 | 1,867,743,276 | I_kwDODunzps5vU4As | 6,183 | Load dataset with non-existent file | {
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https://api.github.com/users/freQuensy23-coder/followers",
"following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions",
"organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs",
"repos_url": "https://api.github.com/users/freQuensy23-coder/repos",
"events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-25T22:21:22 | 2023-08-29T13:26:22 | 2023-08-29T13:26:22 | NONE | null | ### Describe the bug
When loading a dataset with `datasets` and passing a wrong path to the JSON data file, the error message does not say anything about a "wrong path" or "file does not exist" -
```SchemaInferenceError: Please pass `features` or at least one example when writing data```
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('json', data_files='/home/alexey/unreal_file.json')
```
### Expected behavior
Raise an OS `FileNotFoundError` or a custom error with an informative message.
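In the meantime, a guard on the caller's side makes the failure explicit (a minimal sketch; the path is just the example from above):
```python
import os
from datasets import load_dataset

path = "/home/alexey/unreal_file.json"
if not os.path.exists(path):
    raise FileNotFoundError(f"No such data file: {path}")
dataset = load_dataset("json", data_files=path)
```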
### Environment info
```
# packages in environment at /home/alexey/.conda/envs/alex_LoRA:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
accelerate 0.21.0 pypi_0 pypi
aiohttp 3.8.5 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
asttokens 2.0.5 pyhd3eb1b0_0
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
bitsandbytes 0.41.1 pypi_0 pypi
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.05.30 h06a4308_0
certifi 2023.7.22 pypi_0 pypi
charset-normalizer 3.2.0 pypi_0 pypi
click 8.1.6 pypi_0 pypi
cmake 3.27.2 pypi_0 pypi
comm 0.1.2 py310h06a4308_0
contourpy 1.1.0 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
datasets 2.14.4 pypi_0 pypi
debugpy 1.6.7 py310h6a678d5_0
decorator 5.1.1 pyhd3eb1b0_0
dill 0.3.7 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
executing 0.8.3 pyhd3eb1b0_0
filelock 3.12.2 pypi_0 pypi
fire 0.5.0 pypi_0 pypi
fonttools 4.42.0 pypi_0 pypi
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.6.0 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.32 pypi_0 pypi
huggingface-hub 0.16.4 pypi_0 pypi
idna 3.4 pypi_0 pypi
ipykernel 6.25.0 py310h2f386ee_0
ipython 8.12.2 py310h06a4308_0
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 8.0.4 py310h06a4308_0
jedi 0.18.1 py310h06a4308_1
jinja2 3.1.2 pypi_0 pypi
jsonschema 4.19.0 pypi_0 pypi
jsonschema-specifications 2023.7.1 pypi_0 pypi
jupyter_client 8.1.0 py310h06a4308_0
jupyter_core 5.3.0 py310h06a4308_0
jupyterlab_widgets 3.0.5 py310h06a4308_0
kiwisolver 1.4.4 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
lightning-utilities 0.9.0 pypi_0 pypi
lit 16.0.6 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.7.2 pypi_0 pypi
matplotlib-inline 0.1.6 py310h06a4308_0
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.15 pypi_0 pypi
nbformat 4.2.0 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.6 py310h06a4308_0
networkx 3.1 pypi_0 pypi
numpy 1.25.2 pypi_0 pypi
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.2.10.91 pypi_0 pypi
nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi
nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi
nvidia-nccl-cu11 2.14.3 pypi_0 pypi
nvidia-nvtx-cu11 11.7.91 pypi_0 pypi
omegaconf 2.3.0 pypi_0 pypi
openssl 1.1.1v h7f8727e_0
packaging 23.0 py310h06a4308_0
pandas 2.0.3 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pathtools 0.1.2 pypi_0 pypi
peft 0.4.0 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 10.0.0 pypi_0 pypi
pip 23.2.1 py310h06a4308_0
platformdirs 2.5.2 py310h06a4308_0
plotly 5.16.1 pypi_0 pypi
prompt-toolkit 3.0.36 py310h06a4308_0
protobuf 4.24.0 pypi_0 pypi
psutil 5.9.0 py310h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 12.0.1 pypi_0 pypi
pygments 2.15.1 py310h06a4308_1
pyparsing 3.0.9 pypi_0 pypi
python 3.10.0 h12debd9_5
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch-lightning 2.0.6 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
pyzmq 25.1.0 py310h6a678d5_0
readline 8.2 h5eee18b_0
referencing 0.30.2 pypi_0 pypi
regex 2023.8.8 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
rpds-py 0.9.2 pypi_0 pypi
safetensors 0.3.2 pypi_0 pypi
scipy 1.11.1 pypi_0 pypi
sentencepiece 0.1.99 pypi_0 pypi
sentry-sdk 1.29.2 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 68.0.0 py310h06a4308_0
six 1.16.0 pyhd3eb1b0_1
smmap 5.0.0 pypi_0 pypi
sqlite 3.41.2 h5eee18b_0
stack_data 0.2.0 pyhd3eb1b0_0
sympy 1.12 pypi_0 pypi
tenacity 8.2.3 pypi_0 pypi
termcolor 2.3.0 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.13.3 pypi_0 pypi
torch 2.0.1 pypi_0 pypi
torchmetrics 1.0.3 pypi_0 pypi
tornado 6.3.2 py310h5eee18b_0
tqdm 4.66.1 pypi_0 pypi
traitlets 5.7.1 py310h06a4308_0
transformers 4.31.0 pypi_0 pypi
triton 2.0.0 pypi_0 pypi
typing-extensions 4.7.1 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
urllib3 2.0.4 pypi_0 pypi
wandb 0.15.8 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
wheel 0.38.4 py310h06a4308_0
widgetsnbextension 4.0.5 py310h06a4308_0
xxhash 3.3.0 pypi_0 pypi
xz 5.4.2 h5eee18b_0
yarl 1.9.2 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zlib 1.2.13 h5eee18b_0
active environment : None
user config file : /home/alexey/.condarc
populated config files :
conda version : 23.1.0
conda-build version : 3.22.0
python version : 3.9.13.final.0
virtual packages : __archspec=1=x86_64
__cuda=12.0=0
__glibc=2.35=0
__linux=5.19.0=0
__unix=0=0
base environment : /opt/anaconda/anaconda3 (read only)
conda av data dir : /opt/anaconda/anaconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /opt/anaconda/anaconda3/pkgs
/home/alexey/.conda/pkgs
envs directories : /home/alexey/.conda/envs
/opt/anaconda/anaconda3/envs
platform : linux-64
user-agent : conda/23.1.0 requests/2.31.0 CPython/3.9.13 Linux/5.19.0-46-generic ubuntu/22.04.2 glibc/2.35
UID:GID : 1009:1009
netrc file : /home/alexey/.netrc
offline mode : False
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6183/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6182/comments | https://api.github.com/repos/huggingface/datasets/issues/6182/events | https://github.com/huggingface/datasets/issues/6182 | 1,867,203,131 | I_kwDODunzps5vS0I7 | 6,182 | Loading Meteor metric in HF evaluate module crashes due to datasets import issue | {
"login": "dsashulya",
"id": 42322648,
"node_id": "MDQ6VXNlcjQyMzIyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/42322648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsashulya",
"html_url": "https://github.com/dsashulya",
"followers_url": "https://api.github.com/users/dsashulya/followers",
"following_url": "https://api.github.com/users/dsashulya/following{/other_user}",
"gists_url": "https://api.github.com/users/dsashulya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsashulya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsashulya/subscriptions",
"organizations_url": "https://api.github.com/users/dsashulya/orgs",
"repos_url": "https://api.github.com/users/dsashulya/repos",
"events_url": "https://api.github.com/users/dsashulya/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsashulya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-25T14:54:06 | 2023-09-01T18:51:12 | 2023-08-31T14:38:23 | NONE | null | ### Describe the bug
When using Python 3.9 and the `evaluate` module, loading the Meteor metric crashes due to a non-existent import from `datasets.config` in `datasets` v2.14.
### Steps to reproduce the bug
```
from evaluate import load
meteor = load("meteor")
```
produces the following error:
```
from datasets.config import importlib_metadata, version
ImportError: cannot import name 'importlib_metadata' from 'datasets.config' (<path_to_project>/venv/lib/python3.9/site-packages/datasets/config.py)
```
### Expected behavior
`datasets` v2.10 has the following workaround in `config.py`:
```
if PY_VERSION < version.parse("3.8"):
import importlib_metadata
else:
import importlib.metadata as importlib_metadata
```
However, it's absent in v2.14, which might be the cause of the issue.
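For reference, a patch sketch on the metric side, assuming the removed alias can simply be replaced with the standard library on Python ≥ 3.8 (which is effectively what the old workaround did):
```python
# Hypothetical replacement for `from datasets.config import importlib_metadata, version`
import importlib.metadata as importlib_metadata  # stdlib since Python 3.8
from packaging import version

NLTK_VERSION = version.parse(importlib_metadata.version("nltk"))
```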
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.9.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- Evaluate version: 0.4.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6182/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6181/comments | https://api.github.com/repos/huggingface/datasets/issues/6181/events | https://github.com/huggingface/datasets/pull/6181 | 1,867,035,522 | PR_kwDODunzps5Yy2VO | 6,181 | Fix import in `image_load` doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-25T13:12:19 | 2023-08-25T16:12:46 | 2023-08-25T16:02:24 | CONTRIBUTOR | null | Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6181/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6181",
"html_url": "https://github.com/huggingface/datasets/pull/6181",
"diff_url": "https://github.com/huggingface/datasets/pull/6181.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6181.patch",
"merged_at": "2023-08-25T16:02:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6180/comments | https://api.github.com/repos/huggingface/datasets/issues/6180/events | https://github.com/huggingface/datasets/pull/6180 | 1,867,032,578 | PR_kwDODunzps5Yy1r- | 6,180 | Use `hf-internal-testing` repos for hosting test dataset repos | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-25T13:10:26 | 2023-08-25T16:58:02 | 2023-08-25T16:46:22 | CONTRIBUTOR | null | Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6180/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6180",
"html_url": "https://github.com/huggingface/datasets/pull/6180",
"diff_url": "https://github.com/huggingface/datasets/pull/6180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6180.patch",
"merged_at": "2023-08-25T16:46:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6179/comments | https://api.github.com/repos/huggingface/datasets/issues/6179/events | https://github.com/huggingface/datasets/issues/6179 | 1,867,009,016 | I_kwDODunzps5vSEv4 | 6,179 | Map cache with tokenizer | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-25T12:55:18 | 2023-08-31T15:17:24 | null | NONE | null | Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session.
Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with...
setup
```
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained('bert-base-uncased').save_pretrained("tok")
```
this prints different value each time
```
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
print(hash(dumps(AutoTokenizer.from_pretrained("tok"))))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6179/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6178/comments | https://api.github.com/repos/huggingface/datasets/issues/6178/events | https://github.com/huggingface/datasets/issues/6178 | 1,866,610,102 | I_kwDODunzps5vQjW2 | 6,178 | 'import datasets' throws "invalid syntax error" | {
"login": "elia-ashraf",
"id": 128580829,
"node_id": "U_kgDOB6n83Q",
"avatar_url": "https://avatars.githubusercontent.com/u/128580829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elia-ashraf",
"html_url": "https://github.com/elia-ashraf",
"followers_url": "https://api.github.com/users/elia-ashraf/followers",
"following_url": "https://api.github.com/users/elia-ashraf/following{/other_user}",
"gists_url": "https://api.github.com/users/elia-ashraf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elia-ashraf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elia-ashraf/subscriptions",
"organizations_url": "https://api.github.com/users/elia-ashraf/orgs",
"repos_url": "https://api.github.com/users/elia-ashraf/repos",
"events_url": "https://api.github.com/users/elia-ashraf/events{/privacy}",
"received_events_url": "https://api.github.com/users/elia-ashraf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-25T08:35:14 | 2023-08-29T14:57:17 | null | NONE | null | ### Describe the bug
Hi,
I have been trying to import the `datasets` library, but I keep getting this error.
`Traceback (most recent call last):
File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
Cell In[2], line 1
import datasets
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/__init__.py:22
from .arrow_dataset import Dataset
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_dataset.py:67
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_writer.py:27
from .features import Features, Image, Value
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/__init__.py:17
from .audio import Audio
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/audio.py:11
from ..download.streaming_download_manager import xopen, xsplitext
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/__init__.py:10
from .streaming_download_manager import StreamingDownloadManager
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/streaming_download_manager.py:18
from aiohttp.client_exceptions import ClientError
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/__init__.py:7
from .connector import * # noqa
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/connector.py:12
from .client import ClientRequest
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/client.py:144
yield from asyncio.async(resp.release(), loop=loop)
^
SyntaxError: invalid syntax`
I have simply used these commands:
`import datasets`
and
`from datasets import load_dataset`
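A diagnostic note (an assumption, not a confirmed root cause): `asyncio.async` is invalid syntax on Python ≥ 3.7 because `async` became a keyword, so the traceback points at a very old `aiohttp` build rather than at `datasets` itself. A quick check that avoids importing the broken package:
```python
import importlib.metadata

# hypothesis: a pre-2.x aiohttp release that still calls the removed `asyncio.async`
print(importlib.metadata.version("aiohttp"))
```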
### Environment info
The library is installed on a virtual machine on JupyterHub. Although I have used this library multiple times (on the same VM) before to train/test ASR and other ML models, I had never encountered this error. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6178/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6177/comments | https://api.github.com/repos/huggingface/datasets/issues/6177/events | https://github.com/huggingface/datasets/pull/6177 | 1,865,490,962 | PR_kwDODunzps5Ytky- | 6,177 | Use object detection images from `huggingface/documentation-images` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-24T16:16:09 | 2023-08-25T16:30:00 | 2023-08-25T16:21:17 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6177/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6177",
"html_url": "https://github.com/huggingface/datasets/pull/6177",
"diff_url": "https://github.com/huggingface/datasets/pull/6177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6177.patch",
"merged_at": "2023-08-25T16:21:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6176/comments | https://api.github.com/repos/huggingface/datasets/issues/6176/events | https://github.com/huggingface/datasets/issues/6176 | 1,864,436,408 | I_kwDODunzps5vIQq4 | 6,176 | how to limit the size of memory mapped file? | {
"login": "williamium3000",
"id": 47763855,
"node_id": "MDQ6VXNlcjQ3NzYzODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/47763855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamium3000",
"html_url": "https://github.com/williamium3000",
"followers_url": "https://api.github.com/users/williamium3000/followers",
"following_url": "https://api.github.com/users/williamium3000/following{/other_user}",
"gists_url": "https://api.github.com/users/williamium3000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/williamium3000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/williamium3000/subscriptions",
"organizations_url": "https://api.github.com/users/williamium3000/orgs",
"repos_url": "https://api.github.com/users/williamium3000/repos",
"events_url": "https://api.github.com/users/williamium3000/events{/privacy}",
"received_events_url": "https://api.github.com/users/williamium3000/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-24T05:33:45 | 2023-08-26T05:09:56 | null | NONE | null | ### Describe the bug
Hugging Face Datasets uses memory-mapped files to map large datasets into memory for fast access.
However, it seems that it will occupy all of the available memory for memory-mapped files. This is troublesome because our cluster allocates only a small portion of memory to my job (once it goes over the limit, no more memory can be allocated), yet when the dataset checks the total memory, it takes the whole machine's memory into account, which makes Hugging Face Datasets try to allocate more memory than the job is allowed.
So is there a way to explicitly limit the size of the memory-mapped file?
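For what it's worth, one way to see how much resident memory the process actually uses (memory-mapped pages largely live in the OS page cache rather than in the process itself) is a before/after check — a sketch that assumes `psutil` is available:
```python
import psutil
from datasets import load_dataset

process = psutil.Process()
print(f"RSS before: {process.memory_info().rss >> 20} MiB")
dataset = load_dataset("c4", "en", streaming=True)
print(f"RSS after:  {process.memory_info().rss >> 20} MiB")
```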
### Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("c4", "en", streaming=True)
```
### Expected behavior
In a normal environment, this is not a problem.
However, when the system allocates only a portion of the machine's memory to the program, the dataset still checks the machine's total memory, so Hugging Face Datasets may try to allocate more memory than the program is allowed to use.
### Environment info
linux cluster with SGE(Sun Grid Engine) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6176/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6175/comments | https://api.github.com/repos/huggingface/datasets/issues/6175/events | https://github.com/huggingface/datasets/pull/6175 | 1,863,592,678 | PR_kwDODunzps5YnKlx | 6,175 | PyArrow 13 CI fixes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-23T15:45:53 | 2023-08-25T13:15:59 | 2023-08-25T13:06:52 | CONTRIBUTOR | null | Fixes:
* bumps the PyArrow version check in the `cast_array_to_feature` to avoid the offset bug (still not fixed)
* aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always return `datetime [ns]` objects)
Fix #6173
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6175/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6175",
"html_url": "https://github.com/huggingface/datasets/pull/6175",
"diff_url": "https://github.com/huggingface/datasets/pull/6175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6175.patch",
"merged_at": "2023-08-25T13:06:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6173/comments | https://api.github.com/repos/huggingface/datasets/issues/6173/events | https://github.com/huggingface/datasets/issues/6173 | 1,863,422,065 | I_kwDODunzps5vEZBx | 6,173 | Fix CI for pyarrow 13.0.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-08-23T14:11:20 | 2023-08-25T13:06:53 | 2023-08-25T13:06:53 | MEMBER | null | pyarrow 13.0.0 just came out
```
FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different
Attribute "dtype" are different
[left]: datetime64[us, UTC]
[right]: datetime64[ns, UTC]
```
```
FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[3]
to
Sequence(feature=Value(dtype='int64', id=None), length=3, id=None)
```
e.g. in https://github.com/huggingface/datasets/actions/runs/5952253963/job/16143847230
first error may be related to https://github.com/apache/arrow/issues/33321
second one may be because `feature.length * len(array) == len(array_values)` is not satisfied anymore somehow? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6173/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6173/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6172/comments | https://api.github.com/repos/huggingface/datasets/issues/6172/events | https://github.com/huggingface/datasets/issues/6172 | 1,863,318,027 | I_kwDODunzps5vD_oL | 6,172 | Make Dataset streaming queries retryable | {
"login": "rojagtap",
"id": 42299342,
"node_id": "MDQ6VXNlcjQyMjk5MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/42299342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rojagtap",
"html_url": "https://github.com/rojagtap",
"followers_url": "https://api.github.com/users/rojagtap/followers",
"following_url": "https://api.github.com/users/rojagtap/following{/other_user}",
"gists_url": "https://api.github.com/users/rojagtap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rojagtap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rojagtap/subscriptions",
"organizations_url": "https://api.github.com/users/rojagtap/orgs",
"repos_url": "https://api.github.com/users/rojagtap/repos",
"events_url": "https://api.github.com/users/rojagtap/events{/privacy}",
"received_events_url": "https://api.github.com/users/rojagtap/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-08-23T13:15:38 | 2023-08-24T14:29:27 | null | NONE | null | ### Feature request
Streaming datasets, as intended, do not load the entire dataset into memory or onto disk. However, while querying the next data chunk from the remote, it is possible that the service is down or that other issues cause the query to fail. In such a scenario, it would be nice to make these queries retryable (perhaps with a backoff strategy).
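To make the request concrete, here is a minimal sketch of the kind of wrapper meant here, assuming transient server errors surface as exceptions while iterating (note that naively restarting iteration re-reads from the beginning; a real implementation inside `datasets` could resume from the failed shard instead):
```python
import time

def iterate_with_retries(streaming_dataset, max_retries=5, base_delay=2.0):
    # Hypothetical helper: retry iteration with exponential backoff on transient errors.
    attempt = 0
    while True:
        try:
            for example in streaming_dataset:
                yield example
            return
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```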
### Motivation
I was training a model that checkpoints every 1000 steps. At step 1800, I got a 504 HTTP status code error from the Hugging Face Hub in my PyTorch `dataloader`. Given the size of my model and data, it took around 2 hours to reach 1800 steps, and now it will take about an hour to recover the lost 800. It would be better to have a retryable querying strategy.
### Your contribution
It would be better if someone with experience in this area takes this up, as this would require some testing.
"url": "https://api.github.com/repos/huggingface/datasets/issues/6172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6172/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6171/comments | https://api.github.com/repos/huggingface/datasets/issues/6171/events | https://github.com/huggingface/datasets/pull/6171 | 1,862,922,767 | PR_kwDODunzps5Yk4AS | 6,171 | Fix typo in about_mapstyle_vs_iterable.mdx | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-23T09:21:11 | 2023-08-23T09:32:59 | 2023-08-23T09:21:19 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6171/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6171",
"html_url": "https://github.com/huggingface/datasets/pull/6171",
"diff_url": "https://github.com/huggingface/datasets/pull/6171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6171.patch",
"merged_at": "2023-08-23T09:21:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6170/comments | https://api.github.com/repos/huggingface/datasets/issues/6170/events | https://github.com/huggingface/datasets/pull/6170 | 1,862,705,731 | PR_kwDODunzps5YkJOV | 6,170 | feat: Return the name of the currently loaded file | {
"login": "Amitesh-Patel",
"id": 124021133,
"node_id": "U_kgDOB2RpjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/124021133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amitesh-Patel",
"html_url": "https://github.com/Amitesh-Patel",
"followers_url": "https://api.github.com/users/Amitesh-Patel/followers",
"following_url": "https://api.github.com/users/Amitesh-Patel/following{/other_user}",
"gists_url": "https://api.github.com/users/Amitesh-Patel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Amitesh-Patel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Amitesh-Patel/subscriptions",
"organizations_url": "https://api.github.com/users/Amitesh-Patel/orgs",
"repos_url": "https://api.github.com/users/Amitesh-Patel/repos",
"events_url": "https://api.github.com/users/Amitesh-Patel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Amitesh-Patel/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-23T07:08:17 | 2023-08-29T12:41:05 | null | NONE | null | Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92.
fixes #5806 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6170/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6170",
"html_url": "https://github.com/huggingface/datasets/pull/6170",
"diff_url": "https://github.com/huggingface/datasets/pull/6170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6170.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6169/comments | https://api.github.com/repos/huggingface/datasets/issues/6169/events | https://github.com/huggingface/datasets/issues/6169 | 1,862,360,199 | I_kwDODunzps5vAVyH | 6,169 | Configurations in yaml not working | {
"login": "tsor13",
"id": 45085098,
"node_id": "MDQ6VXNlcjQ1MDg1MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/45085098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsor13",
"html_url": "https://github.com/tsor13",
"followers_url": "https://api.github.com/users/tsor13/followers",
"following_url": "https://api.github.com/users/tsor13/following{/other_user}",
"gists_url": "https://api.github.com/users/tsor13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsor13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsor13/subscriptions",
"organizations_url": "https://api.github.com/users/tsor13/orgs",
"repos_url": "https://api.github.com/users/tsor13/repos",
"events_url": "https://api.github.com/users/tsor13/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsor13/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-23T00:13:22 | 2023-08-23T15:35:31 | null | NONE | null | ### Dataset configurations cannot be created in YAML/README
Hello! I'm trying to follow the docs in order to create structure in my dataset, as added in #5331: https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118
I have the exact example in my config file for [my data repo](https://huggingface.co/datasets/tsor13/test):
```
configs:
- config_name: main_data
data_files: "main_data.csv"
- config_name: additional_data
data_files: "additional_data.csv"
```
Yet, I'm unable to load different configurations:
```
from datasets import get_dataset_config_names
get_dataset_config_names('tsor13/test', use_auth_token=True)
```
returns a single configuration, `['tsor13--test']`
Does anyone have any insights?
@polinaeterna thank you for adding this feature, it is super useful. Do you happen to have any ideas?
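For completeness, this is the full README front-matter layout as I understand the docs (the `---` delimiters and placing the block at the very top of `README.md` are assumptions based on that reading):
```yaml
---
configs:
  - config_name: main_data
    data_files: "main_data.csv"
  - config_name: additional_data
    data_files: "additional_data.csv"
---
```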
### Steps to reproduce the bug
from datasets import get_dataset_config_names
get_dataset_config_names('tsor13/test')
### Expected behavior
I would expect there to be two configurations, `main_data` and `additional_data`. However, only `['tsor13--test']` is returned.
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6169/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6168/comments | https://api.github.com/repos/huggingface/datasets/issues/6168/events | https://github.com/huggingface/datasets/pull/6168 | 1,861,867,274 | PR_kwDODunzps5YhT7Y | 6,168 | Fix ArrayXD YAML conversion | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-22T17:02:54 | 2023-08-29T12:42:32 | null | CONTRIBUTOR | null | Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion.
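For context, here is a minimal sketch (not part of the patch, and assuming standard PyYAML round-trip behavior) of why a tuple-valued `shape` is problematic in the YAML metadata:
```python
import yaml

original = {"shape": (28, 28, 1)}
roundtripped = yaml.safe_load(yaml.safe_dump(original))
print(roundtripped)              # {'shape': [28, 28, 1]} -- YAML has no tuple type
print(roundtripped == original)  # False, hence storing the shape as a list up front
```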
Fix #6112 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6168/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6168",
"html_url": "https://github.com/huggingface/datasets/pull/6168",
"diff_url": "https://github.com/huggingface/datasets/pull/6168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6168.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6167/comments | https://api.github.com/repos/huggingface/datasets/issues/6167/events | https://github.com/huggingface/datasets/pull/6167 | 1,861,474,327 | PR_kwDODunzps5Yf9-t | 6,167 | Allow hyphen in split name | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-22T13:30:59 | 2023-08-22T15:39:24 | 2023-08-22T15:38:53 | CONTRIBUTOR | null | To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6167/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6167",
"html_url": "https://github.com/huggingface/datasets/pull/6167",
"diff_url": "https://github.com/huggingface/datasets/pull/6167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6167.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6166/comments | https://api.github.com/repos/huggingface/datasets/issues/6166/events | https://github.com/huggingface/datasets/pull/6166 | 1,861,259,055 | PR_kwDODunzps5YfOt0 | 6,166 | Document BUILDER_CONFIG_CLASS | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-22T11:27:41 | 2023-08-23T14:01:25 | 2023-08-23T13:52:36 | MEMBER | null | Related to https://github.com/huggingface/datasets/issues/6130 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6166",
"html_url": "https://github.com/huggingface/datasets/pull/6166",
"diff_url": "https://github.com/huggingface/datasets/pull/6166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6166.patch",
"merged_at": "2023-08-23T13:52:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6165/comments | https://api.github.com/repos/huggingface/datasets/issues/6165/events | https://github.com/huggingface/datasets/pull/6165 | 1,861,124,284 | PR_kwDODunzps5YexBL | 6,165 | Fix multiprocessing with spawn in iterable datasets | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-22T10:07:23 | 2023-08-29T13:27:14 | 2023-08-29T13:18:11 | CONTRIBUTOR | null | The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems.
This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers.
I fixed the issue by replacing lambda and local methods which are not pickle-able.
See the example below:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
if __name__ == "__main__":
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.to_iterable_dataset(num_shards=3)
dataset = dataset.remove_columns(["package_name"])
dataset = dataset.rename_columns({
"review": "review1"
})
dataset = dataset.rename_column("date", "date1")
for sample in DataLoader(dataset, batch_size=None, num_workers=3):
print(sample)
```
To reproduce the issue (and observe the fix) on a Linux system, adding these lines should do the trick:
```python
import multiprocessing
multiprocessing.set_start_method('spawn')
```
I also removed what looks like code duplication between `rename_columns` and `rename_column`.
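For context, a minimal sketch (illustrative only, not the actual patch) of the picklability pattern such a fix relies on:
```python
from functools import partial

# Lambdas and locally-defined closures cannot be pickled, so they break under "spawn":
# fn = lambda example: {k: example[k] for k in columns}   # fails to pickle

# A picklable alternative: a module-level function bound with functools.partial
def keep_columns(example, columns):
    return {k: example[k] for k in columns}

picklable_fn = partial(keep_columns, columns=["review1", "date1"])
```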
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6165/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6165",
"html_url": "https://github.com/huggingface/datasets/pull/6165",
"diff_url": "https://github.com/huggingface/datasets/pull/6165.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6165.patch",
"merged_at": "2023-08-29T13:18:11"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6164/comments | https://api.github.com/repos/huggingface/datasets/issues/6164/events | https://github.com/huggingface/datasets/pull/6164 | 1,859,560,007 | PR_kwDODunzps5YZZAJ | 6,164 | Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-21T14:57:54 | 2023-08-21T16:27:05 | 2023-08-21T16:18:26 | CONTRIBUTOR | null | When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with
```
File "app.py", line 123, in add_new_eval
eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT)
File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5501, in push_to_hub
if not metadata_configs:
UnboundLocalError: local variable 'metadata_configs' referenced before assignment
```
This fixes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6164/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6164",
"html_url": "https://github.com/huggingface/datasets/pull/6164",
"diff_url": "https://github.com/huggingface/datasets/pull/6164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6164.patch",
"merged_at": "2023-08-21T16:18:26"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6163/comments | https://api.github.com/repos/huggingface/datasets/issues/6163/events | https://github.com/huggingface/datasets/issues/6163 | 1,857,682,241 | I_kwDODunzps5uuftB | 6,163 | Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32 | {
"login": "shishirCTC",
"id": 90616801,
"node_id": "MDQ6VXNlcjkwNjE2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/90616801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shishirCTC",
"html_url": "https://github.com/shishirCTC",
"followers_url": "https://api.github.com/users/shishirCTC/followers",
"following_url": "https://api.github.com/users/shishirCTC/following{/other_user}",
"gists_url": "https://api.github.com/users/shishirCTC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shishirCTC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shishirCTC/subscriptions",
"organizations_url": "https://api.github.com/users/shishirCTC/orgs",
"repos_url": "https://api.github.com/users/shishirCTC/repos",
"events_url": "https://api.github.com/users/shishirCTC/events{/privacy}",
"received_events_url": "https://api.github.com/users/shishirCTC/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-19T11:34:40 | 2023-08-21T13:28:16 | null | NONE | null | ### Describe the bug
I am getting the following error while trying to upload a CSV sheet to train a model. My CSV content is exactly the same as shown in the example CSV file on the AutoTrain page. I'm attaching a screenshot of the error for reference. I have also tried converting the integer answer indices into strings by wrapping them in quotes, as well as leaving them unquoted.
Can anyone please help me out?
FYI : I am using Chrome browser.
Error type: ArrowInvalid
Details: Failed to parse string: '[254,254]' as a scalar of type int32
![Screenshot 2023-08-19 165827](https://github.com/huggingface/datasets/assets/90616801/95fad96e-7dce-4bb5-9f83-9f1659a32891)
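For illustration, here is a hedged sketch of the kind of type mismatch this error usually points to (the assumption being that the column is declared as int32 but a row contains a stringified list):
```python
import pyarrow as pa

pa.array(["254"]).cast(pa.int32())        # ok -> [254]
pa.array(["[254,254]"]).cast(pa.int32())  # ArrowInvalid: Failed to parse string as a scalar of type int32
```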
### Steps to reproduce the bug
Kindly let me know how to fix this?
### Expected behavior
Kindly let me know how to fix this?
### Environment info
Kindly let me know how to fix this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6163/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6162/comments | https://api.github.com/repos/huggingface/datasets/issues/6162/events | https://github.com/huggingface/datasets/issues/6162 | 1,856,198,342 | I_kwDODunzps5uo1bG | 6,162 | load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contains different data fields | {
"login": "rbrugaro",
"id": 82971690,
"node_id": "MDQ6VXNlcjgyOTcxNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/82971690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rbrugaro",
"html_url": "https://github.com/rbrugaro",
"followers_url": "https://api.github.com/users/rbrugaro/followers",
"following_url": "https://api.github.com/users/rbrugaro/following{/other_user}",
"gists_url": "https://api.github.com/users/rbrugaro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rbrugaro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rbrugaro/subscriptions",
"organizations_url": "https://api.github.com/users/rbrugaro/orgs",
"repos_url": "https://api.github.com/users/rbrugaro/repos",
"events_url": "https://api.github.com/users/rbrugaro/events{/privacy}",
"received_events_url": "https://api.github.com/users/rbrugaro/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-18T07:19:39 | 2023-08-18T17:00:35 | null | NONE | null | ### Describe the bug
Loading some jsonl files from the RedPajama-Data-1T GitHub source [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) fails because one row of the file contains an extra field called **symlink_target: string>**.
When that line is deleted, loading succeeds.
We also tried loading this file (with the discrepancy) using the following function, and it works:
```python
os.environ["RED_PAJAMA_DATA_DIR"] ="/path_to_local_copy_of_RedPajama-Data-1T"
ds = load_dataset('togethercomputer/RedPajama-Data-1T', 'github',cache_dir="/path_to_folder_with_jsonl",streaming=True)['train']
```
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. Load one jsonl from the redpajama-data-1T
```bash
wget https://data.together.xyz/redpajama-data-1T/v1.0.0/github/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl
```
2. Loading the dataset gives an error:
```python
from datasets import load_dataset
ds = load_dataset('json', data_files='/path_to/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl')
```
_TypeError: Couldn't cast array of type
Struct
<content_hash: string,
timestamp: string,
source: string,
line_count: int64,
max_line_length: int64,
avg_line_length: double,
alnum_prop: double,
repo_name: string,
id: string,
size: string,
binary: bool,
copies: string,
ref: string,
path: string,
mode: string,
license: string,
language: list<item: struct<name: string, bytes: string>>, **symlink_target: string>**
to
{'content_hash': Value(dtype='string', id=None),
'timestamp': Value(dtype='string', id=None),
'source': Value(dtype='string', id=None),
'line_count': Value(dtype='int64', id=None),
'max_line_length': Value(dtype='int64', id=None),
'avg_line_length': Value(dtype='float64', id=None),
'alnum_prop': Value(dtype='float64', id=None),
'repo_name': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None),
'size': Value(dtype='string', id=None),
'binary': Value(dtype='bool', id=None),
'copies': Value(dtype='string', id=None),
'ref': Value(dtype='string', id=None),
'path': Value(dtype='string', id=None),
'mode': Value(dtype='string', id=None),
'license': Value(dtype='string', id=None),
'language': [{'name': Value(dtype='string', id=None), 'bytes': Value(dtype='string', id=None)}]}_
3. To remove the problematic line that includes the **symlink_target: string>** field, do:
```bash
sed -i '112252d' filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl
```
4. Rerunning the loading function now succeeds:
```python
from datasets import load_dataset
ds = load_dataset('json', data_files='/path_to/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl')
```
### Expected behavior
Have a clean dataset without discrepancies on the jsonl fields or have the load_dataset('json',...) method not error out.
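A possible workaround sketch in the meantime, under the assumption that pandas will union the heterogeneous fields and fill the missing ones:
```python
import pandas as pd
from datasets import Dataset

df = pd.read_json("filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl", lines=True)
ds = Dataset.from_pandas(df)
```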
### Environment info
- `datasets` version: 2.14.1
- Platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6162/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6161/comments | https://api.github.com/repos/huggingface/datasets/issues/6161/events | https://github.com/huggingface/datasets/pull/6161 | 1,855,794,354 | PR_kwDODunzps5YM0g7 | 6,161 | Fix protocol prefix for Beam | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-17T22:40:37 | 2023-08-18T13:47:59 | null | CONTRIBUTOR | null | Fix #6147 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6161/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6161",
"html_url": "https://github.com/huggingface/datasets/pull/6161",
"diff_url": "https://github.com/huggingface/datasets/pull/6161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6161.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6160/comments | https://api.github.com/repos/huggingface/datasets/issues/6160/events | https://github.com/huggingface/datasets/pull/6160 | 1,855,760,543 | PR_kwDODunzps5YMtLQ | 6,160 | Fix Parquet loading with `columns` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-17T21:58:24 | 2023-08-17T22:44:59 | 2023-08-17T22:36:04 | CONTRIBUTOR | null | Fix #6149 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6160/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6160",
"html_url": "https://github.com/huggingface/datasets/pull/6160",
"diff_url": "https://github.com/huggingface/datasets/pull/6160.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6160.patch",
"merged_at": "2023-08-17T22:36:04"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6159/comments | https://api.github.com/repos/huggingface/datasets/issues/6159/events | https://github.com/huggingface/datasets/issues/6159 | 1,855,691,512 | I_kwDODunzps5um5r4 | 6,159 | Add `BoundingBox` feature | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-08-17T20:49:51 | 2023-08-17T20:49:51 | null | CONTRIBUTOR | null | ... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explained [here](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/)), so we need to decide which one to support (or maybe all of them).
cc @NielsRogge @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6159/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6159/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6158/comments | https://api.github.com/repos/huggingface/datasets/issues/6158/events | https://github.com/huggingface/datasets/pull/6158 | 1,855,374,220 | PR_kwDODunzps5YLZBf | 6,158 | [docs] Complete `to_iterable_dataset` | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-17T17:02:11 | 2023-08-17T19:24:20 | 2023-08-17T19:13:15 | MEMBER | null | Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6158/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6158",
"html_url": "https://github.com/huggingface/datasets/pull/6158",
"diff_url": "https://github.com/huggingface/datasets/pull/6158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6158.patch",
"merged_at": "2023-08-17T19:13:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6157/comments | https://api.github.com/repos/huggingface/datasets/issues/6157/events | https://github.com/huggingface/datasets/issues/6157 | 1,855,265,663 | I_kwDODunzps5ulRt_ | 6,157 | DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding' | {
"login": "AisingioroHao0",
"id": 51043929,
"node_id": "MDQ6VXNlcjUxMDQzOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AisingioroHao0",
"html_url": "https://github.com/AisingioroHao0",
"followers_url": "https://api.github.com/users/AisingioroHao0/followers",
"following_url": "https://api.github.com/users/AisingioroHao0/following{/other_user}",
"gists_url": "https://api.github.com/users/AisingioroHao0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AisingioroHao0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AisingioroHao0/subscriptions",
"organizations_url": "https://api.github.com/users/AisingioroHao0/orgs",
"repos_url": "https://api.github.com/users/AisingioroHao0/repos",
"events_url": "https://api.github.com/users/AisingioroHao0/events{/privacy}",
"received_events_url": "https://api.github.com/users/AisingioroHao0/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 11 | 2023-08-17T15:48:11 | 2023-09-01T17:38:26 | null | NONE | null | ### Describe the bug
When I called `load_dataset`, it raised "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". The second time I ran it, there was no error and the dataset object worked.
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 dataset = load_dataset(
2 "/home/aihao/workspace/DeepLearningContent/datasets/manga",
3 data_dir="/home/aihao/workspace/DeepLearningContent/datasets/manga",
4 split="train",
5 )
File [~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/load.py:2146](https://vscode-remote+ssh-002dremote-002bhome.vscode-resource.vscode-cdn.net/home/aihao/workspace/DeepLearningContent/datasets/~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/load.py:2146), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2142 # Build dataset for splits
2143 keep_in_memory = (
2144 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2145 )
-> 2146 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2147 # Rename and cast features to match task schema
2148 if task is not None:
2149 # To avoid issuing the same warning twice
File [~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py:1190](https://vscode-remote+ssh-002dremote-002bhome.vscode-resource.vscode-cdn.net/home/aihao/workspace/DeepLearningContent/datasets/~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py:1190), in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1187 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
1189 # Create a dataset for each of the given splits
-> 1190 datasets = map_nested(
1191 partial(
1192 self._build_single_dataset,
...
File [~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/info.py:379](https://vscode-remote+ssh-002dremote-002bhome.vscode-resource.vscode-cdn.net/home/aihao/workspace/DeepLearningContent/datasets/~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/info.py:379), in DatasetInfo.copy(self)
378 def copy(self) -> "DatasetInfo":
--> 379 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
TypeError: DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'
```
### Steps to reproduce the bug
/home/aihao/workspace/DeepLearningContent/datasets/images/images.py
```python
from logging import config
import datasets
import os
from PIL import Image
import csv
import json
class ImagesConfig(datasets.BuilderConfig):
def __init__(self, **kwargs):
super(ImagesConfig, self).__init__(**kwargs)
class Images(datasets.GeneratorBasedBuilder):
def _split_generators(self, dl_manager: datasets.DownloadManager):
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={"split": datasets.Split.TRAIN},
)
]
BUILDER_CONFIGS = [
ImagesConfig(
name="similar_pairs",
description="simliar pair dataset,item is a pair of similar images",
),
ImagesConfig(
name="image_prompt_pairs",
description="image prompt pairs",
),
]
def _info(self):
if self.config.name == "similar_pairs":
return datasets.Features(
{
"image1": datasets.features.Image(),
"image2": datasets.features.Image(),
"similarity": datasets.Value("float32"),
}
)
elif self.config.name == "image_prompt_pairs":
return datasets.Features(
{"image": datasets.features.Image(), "prompt": datasets.Value("string")}
)
def _generate_examples(self, split):
data_path = os.path.join(self.config.data_dir, "data")
if self.config.name == "similar_pairs":
prompts = {}
with open(os.path.join(data_path ,"prompts.json"), "r") as f:
prompts = json.load(f)
with open(os.path.join(data_path, "similar_pairs.csv"), "r") as f:
reader = csv.reader(f)
for row in reader:
image1_path, image2_path, similarity = row
yield image1_path + ":" + image2_path + ":", {
"image1": Image.open(image1_path),
"prompt1": prompts[image1_path],
"image2": Image.open(image2_path),
"prompt2": prompts[image2_path],
"similarity": float(similarity),
}
```
Code that triggers the error:
```python
from datasets import load_dataset
import json
import csv
import ast
import torch
data_dir = "/home/aihao/workspace/DeepLearningContent/datasets/images"
dataset = load_dataset(data_dir, data_dir=data_dir, name="similar_pairs")
```
### Expected behavior
The first execution gives an error, but subsequent runs work fine.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6157/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6156/comments | https://api.github.com/repos/huggingface/datasets/issues/6156/events | https://github.com/huggingface/datasets/issues/6156 | 1,854,768,618 | I_kwDODunzps5ujYXq | 6,156 | Why not use self._epoch as seed to shuffle in distributed training with IterableDataset | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-17T10:58:20 | 2023-08-17T14:33:15 | 2023-08-17T14:33:14 | CONTRIBUTOR | null | ### Describe the bug
Currently, distributed training with `IterableDataset` needs a fixed seed passed to `shuffle` so that each node uses the same seed and avoids overlapping samples.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question is: why not directly use `self._epoch`, which is set by `set_epoch`, as the seed? It's almost the same across nodes.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1790-L1801
If `self._epoch` is not used as the shuffling seed, what does this method do to prepare an epoch-seeded generator?
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1206
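For what it's worth, here is a hedged sketch of the general idea being asked about (not the library's actual code): derive the per-epoch generator from a fixed base seed plus the epoch, so nodes shuffle identically within an epoch but differently across epochs.
```python
import numpy as np

def epoch_generator(base_seed: int, epoch: int) -> np.random.Generator:
    # Same (base_seed, epoch) on every node -> identical shuffling order per epoch
    return np.random.default_rng(base_seed + epoch)
```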
### Steps to reproduce the bug
As mentioned above.
### Expected behavior
As mentioned above.
### Environment info
Not related | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6156/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6155/comments | https://api.github.com/repos/huggingface/datasets/issues/6155/events | https://github.com/huggingface/datasets/pull/6155 | 1,854,661,682 | PR_kwDODunzps5YI8Pc | 6,155 | Raise FileNotFoundError when passing data_files that don't exist | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-17T09:49:48 | 2023-08-18T13:45:58 | 2023-08-18T13:35:13 | MEMBER | null | e.g. when running `load_dataset("parquet", data_files="doesnt_exist.parquet")` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6155/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6155",
"html_url": "https://github.com/huggingface/datasets/pull/6155",
"diff_url": "https://github.com/huggingface/datasets/pull/6155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6155.patch",
"merged_at": "2023-08-18T13:35:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6154/comments | https://api.github.com/repos/huggingface/datasets/issues/6154/events | https://github.com/huggingface/datasets/pull/6154 | 1,854,595,943 | PR_kwDODunzps5YItlH | 6,154 | Use yaml instead of get data patterns when possible | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-08-17T09:17:05 | 2023-08-17T20:46:25 | 2023-08-17T20:37:19 | MEMBER | null | This would make the data files resolution faster: no need to list all the data files to infer the dataset builder to use.
fix https://github.com/huggingface/datasets/issues/6140 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6154/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6154",
"html_url": "https://github.com/huggingface/datasets/pull/6154",
"diff_url": "https://github.com/huggingface/datasets/pull/6154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6154.patch",
"merged_at": "2023-08-17T20:37:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6152/comments | https://api.github.com/repos/huggingface/datasets/issues/6152/events | https://github.com/huggingface/datasets/issues/6152 | 1,852,494,646 | I_kwDODunzps5uatM2 | 6,152 | FolderBase Dataset automatically resolves under current directory when data_dir is not specified | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | null | [] | null | 4 | 2023-08-16T04:38:09 | 2023-08-17T13:45:18 | null | CONTRIBUTOR | null | ### Describe the bug
FolderBase Dataset automatically resolves under current directory when data_dir is not specified.
For example:
```
load_dataset("audiofolder")
```
takes a long time to resolve and collect `data_files` from the current directory. I think it should instead reach this line for error handling: https://github.com/huggingface/datasets/blob/cb8c5de5145c7e7eee65391cb7f4d92f0d565d62/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58-L59
### Steps to reproduce the bug
```
load_dataset("audiofolder")
```
### Expected behavior
Error report
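For reference, the intended usage of the folder-based builders passes an explicit directory, e.g. (the path below is hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="/path/to/audio_folder")
```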
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6152/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6151/comments | https://api.github.com/repos/huggingface/datasets/issues/6151/events | https://github.com/huggingface/datasets/issues/6151 | 1,851,497,818 | I_kwDODunzps5uW51a | 6,151 | Faster sorting for single key items | {
"login": "jackapbutler",
"id": 47942453,
"node_id": "MDQ6VXNlcjQ3OTQyNDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47942453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackapbutler",
"html_url": "https://github.com/jackapbutler",
"followers_url": "https://api.github.com/users/jackapbutler/followers",
"following_url": "https://api.github.com/users/jackapbutler/following{/other_user}",
"gists_url": "https://api.github.com/users/jackapbutler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackapbutler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackapbutler/subscriptions",
"organizations_url": "https://api.github.com/users/jackapbutler/orgs",
"repos_url": "https://api.github.com/users/jackapbutler/repos",
"events_url": "https://api.github.com/users/jackapbutler/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackapbutler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-08-15T14:02:31 | 2023-08-21T14:38:26 | 2023-08-21T14:38:25 | NONE | null | ### Feature request
A faster way to sort a dataset which contains a large number of rows.
### Motivation
The current sorting implementations took significantly longer than expected when I was running on a dataset trying to sort by timestamps.
**Code snippet:**
```python
import os
import datasets

ds = datasets.load_dataset("json", **{"data_files": {"train": "path-to-jsonlines"}, "split": "train"}, num_proc=os.cpu_count(), keep_in_memory=True)
sorted_ds = ds.sort("pubDate", keep_in_memory=True)
```
However, once I switched to a different method which
1. unpacked to a list of tuples
2. sorted tuples by key
3. ran `.select` with the sorted list of indices
It was significantly faster (orders of magnitude, especially with millions of rows).
### Your contribution
I'd be happy to implement a crude single-key sorting algorithm so that other users can benefit from this trick. Broadly, this would take a `Dataset` and perform:
```python
# ds is a Dataset object
# key_name is the sorting key
class Dataset:
...
    def _sort(self, key_name: str) -> "Dataset":
        # Pair each row index with its key value, sort by the key, then select by index
        index_keys = [(i, x) for i, x in enumerate(self[key_name])]
        sorted_rows = sorted(index_keys, key=lambda x: x[1])
        sorted_indices = [x[0] for x in sorted_rows]
        return self.select(sorted_indices)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6151/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6150/comments | https://api.github.com/repos/huggingface/datasets/issues/6150/events | https://github.com/huggingface/datasets/issues/6150 | 1,850,740,456 | I_kwDODunzps5uUA7o | 6,150 | Allow dataset implement .take | {
"login": "brando90",
"id": 1855278,
"node_id": "MDQ6VXNlcjE4NTUyNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brando90",
"html_url": "https://github.com/brando90",
"followers_url": "https://api.github.com/users/brando90/followers",
"following_url": "https://api.github.com/users/brando90/following{/other_user}",
"gists_url": "https://api.github.com/users/brando90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brando90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brando90/subscriptions",
"organizations_url": "https://api.github.com/users/brando90/orgs",
"repos_url": "https://api.github.com/users/brando90/repos",
"events_url": "https://api.github.com/users/brando90/events{/privacy}",
"received_events_url": "https://api.github.com/users/brando90/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2023-08-15T00:17:51 | 2023-08-17T13:49:37 | null | NONE | null | ### Feature request
I want to do:
```
dataset.take(512)
```
but it only works with streaming = True
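For illustration, a minimal sketch of what a non-streaming `take` could look like on top of the existing `select`; this is an assumption about one possible implementation, not the library's API:
```python
from datasets import Dataset

def take(ds: Dataset, n: int) -> Dataset:
    # Keep the first n rows of a map-style (non-streaming) dataset
    return ds.select(range(min(n, len(ds))))
```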
### Motivation
A uniform interface to datasets. It is really surprising that the above only works with `streaming=True`.
### Your contribution
It should be trivial to adapt the `IterableDataset.take` implementation to the non-streaming code path (when `streaming=False`). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6150/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6149/comments | https://api.github.com/repos/huggingface/datasets/issues/6149/events | https://github.com/huggingface/datasets/issues/6149 | 1,850,700,624 | I_kwDODunzps5uT3NQ | 6,149 | Dataset.from_parquet cannot load subset of columns | {
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | 1 | 2023-08-14T23:28:22 | 2023-08-17T22:36:05 | 2023-08-17T22:36:05 | CONTRIBUTOR | null | ### Describe the bug
When using `Dataset.from_parquet(path_or_paths, columns=[...])` and a subset of columns, loading fails with a variant of the following
```
ValueError: Couldn't cast
a: int64
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 273
to
{'a': Value(dtype='int64', id=None), 'b': Value(dtype='int64', id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
```
Looks to be triggered by https://github.com/huggingface/datasets/blob/c02a44715c036b5261686669727394b1308a3a4b/src/datasets/table.py#L2285-L2286
### Steps to reproduce the bug
```
import pandas as pd
from datasets import Dataset
pd.DataFrame([{"a": 1, "b": 2}]).to_parquet("test.pq")
Dataset.from_parquet("test.pq", columns=["a"])
```
### Expected behavior
A subset of columns should be loaded without error
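In the meantime, a hedged workaround sketch (assuming `pyarrow` is available) is to read the column subset directly and build the dataset from it:
```python
import pyarrow.parquet as pq
from datasets import Dataset

table = pq.read_table("test.pq", columns=["a"])
ds = Dataset.from_pandas(table.to_pandas())
```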
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.10.0-23-cloud-amd64-x86_64-with-glibc2.2.5
- Python version: 3.8.16
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6149/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6148/comments | https://api.github.com/repos/huggingface/datasets/issues/6148/events | https://github.com/huggingface/datasets/pull/6148 | 1,849,524,683 | PR_kwDODunzps5X3oqv | 6,148 | Ignore parallel warning in map_nested | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-14T10:43:41 | 2023-08-17T08:54:06 | 2023-08-17T08:43:58 | MEMBER | null | This warning message was shown every time you pass num_proc to `load_dataset` because of `map_nested`
```
parallel_map is experimental and might be subject to breaking changes in the future
```
This PR removes it for `map_nested`. If someone uses another parallel backend, they are already warned when `parallel_backend` is called anyway.
"url": "https://api.github.com/repos/huggingface/datasets/issues/6148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6148/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6148",
"html_url": "https://github.com/huggingface/datasets/pull/6148",
"diff_url": "https://github.com/huggingface/datasets/pull/6148.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6148.patch",
"merged_at": "2023-08-17T08:43:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6147/comments | https://api.github.com/repos/huggingface/datasets/issues/6147/events | https://github.com/huggingface/datasets/issues/6147 | 1,848,914,830 | I_kwDODunzps5uNDOO | 6,147 | ValueError when running BeamBasedBuilder with GCS path in cache_dir | {
"login": "ktrk115",
"id": 13844767,
"node_id": "MDQ6VXNlcjEzODQ0NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13844767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktrk115",
"html_url": "https://github.com/ktrk115",
"followers_url": "https://api.github.com/users/ktrk115/followers",
"following_url": "https://api.github.com/users/ktrk115/following{/other_user}",
"gists_url": "https://api.github.com/users/ktrk115/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ktrk115/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktrk115/subscriptions",
"organizations_url": "https://api.github.com/users/ktrk115/orgs",
"repos_url": "https://api.github.com/users/ktrk115/repos",
"events_url": "https://api.github.com/users/ktrk115/events{/privacy}",
"received_events_url": "https://api.github.com/users/ktrk115/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-14T03:11:34 | 2023-08-14T03:19:43 | null | NONE | null | ### Describe the bug
When running the BeamBasedBuilder with a GCS path specified in the cache_dir, the following ValueError occurs:
```
ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://my-bucket/huggingface_datasets/my_beam_dataset/default/0.0.0/my_beam_dataset-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
```
The same error occurs after running `pip install apache-beam[gcp]` as instructed.
### Steps to reproduce the bug
Put `my_beam_dataset.py`:
```python
import datasets
class MyBeamDataset(datasets.BeamBasedBuilder):
    def _info(self):
        features = datasets.Features({"value": datasets.Value("int64")})
        return datasets.DatasetInfo(features=features)

    def _split_generators(self, dl_manager, pipeline):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})]

    def _build_pcollection(self, pipeline):
        import apache_beam as beam

        return pipeline | beam.Create([{"value": i} for i in range(10)])
```
Run:
```bash
datasets-cli run_beam my_beam_dataset.py --cache_dir=gs://my-bucket/huggingface_datasets/ --beam_pipeline_options="runner=DirectRunner"
```
### Expected behavior
Running the BeamBasedBuilder with a GCS cache path without any errors.
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6147/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6146/comments | https://api.github.com/repos/huggingface/datasets/issues/6146/events | https://github.com/huggingface/datasets/issues/6146 | 1,848,417,366 | I_kwDODunzps5uLJxW | 6,146 | DatasetGenerationError when load glue benchmark datasets from `load_dataset` | {
"login": "yusx-swapp",
"id": 78742415,
"node_id": "MDQ6VXNlcjc4NzQyNDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/78742415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yusx-swapp",
"html_url": "https://github.com/yusx-swapp",
"followers_url": "https://api.github.com/users/yusx-swapp/followers",
"following_url": "https://api.github.com/users/yusx-swapp/following{/other_user}",
"gists_url": "https://api.github.com/users/yusx-swapp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yusx-swapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yusx-swapp/subscriptions",
"organizations_url": "https://api.github.com/users/yusx-swapp/orgs",
"repos_url": "https://api.github.com/users/yusx-swapp/repos",
"events_url": "https://api.github.com/users/yusx-swapp/events{/privacy}",
"received_events_url": "https://api.github.com/users/yusx-swapp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-13T05:17:56 | 2023-08-26T22:09:09 | 2023-08-26T22:09:09 | NONE | null | ### Describe the bug
Package version: datasets-2.14.4
When I run the following code:
```python
from datasets import load_dataset
dataset = load_dataset("glue", "ax")
```
I got the following error:
```
---------------------------------------------------------------------------
SchemaInferenceError Traceback (most recent call last)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1949, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1948 num_shards = shard_id + 1
-> 1949 num_examples, num_bytes = writer.finalize()
1950 writer.close()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/arrow_writer.py:598, in ArrowWriter.finalize(self, close_stream)
597 self.stream.close()
--> 598 raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
599 logger.debug(
600 f"Done writing {self._num_examples} {self.unit} in {self._num_bytes} bytes {self._path if self._path else ''}."
601 )
SchemaInferenceError: Please pass `features` or at least one example when writing data
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[5], line 3
1 from datasets import load_dataset
----> 3 dataset = load_dataset("glue", "ax")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/load.py:2136, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2133 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2135 # Download and prepare data
-> 2136 builder_instance.download_and_prepare(
2137 download_config=download_config,
2138 download_mode=download_mode,
2139 verification_mode=verification_mode,
2140 try_from_hf_gcs=try_from_hf_gcs,
2141 num_proc=num_proc,
2142 storage_options=storage_options,
2143 )
2145 # Build dataset for splits
2146 keep_in_memory = (
2147 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2148 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1049, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1045 split_dict.add(split_generator.split_info)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
1052 "Cannot find data file. "
1053 + (self.manual_download_instructions or "")
1054 + "\nOriginal error:\n"
1055 + str(e)
1056 ) from None
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1813, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
1816 if done:
1817 result = content
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1958, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("glue", "ax")
```
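For context, a quick way to check which splits the `ax` config actually exposes (a minimal sketch using the public inspection API; the `ax` subset may not ship a train split):
```python
from datasets import get_dataset_split_names

print(get_dataset_split_names("glue", "ax"))
```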
### Expected behavior
When generating the train split:
Generating train split:
0/0 [00:00<?, ? examples/s]
It raises the error:
DatasetGenerationError: An error occurred while generating the dataset
### Environment info
datasets-2.14.4.
Python 3.10 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6146/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6153/comments | https://api.github.com/repos/huggingface/datasets/issues/6153/events | https://github.com/huggingface/datasets/issues/6153 | 1,852,630,074 | I_kwDODunzps5ubOQ6 | 6,153 | custom load dataset to hub | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-13T04:42:22 | 2023-08-17T14:17:05 | null | NONE | null | ### System Info
kaggle notebook
I transformed the dataset:
```
dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt")
```
to
formatted_dataset:
```
Dataset({
features: ['message_tree_id', 'message_tree_text'],
num_rows: 33143
})
```
but I would like to know how to upload it to the Hub.
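A minimal sketch of what I assume is the intended approach (the repo id below is hypothetical, and it requires being logged in with a write token):
```python
from huggingface_hub import login

login()  # or run `huggingface-cli login` once in the notebook
formatted_dataset.push_to_hub("your-username/first-instruct-message-trees")  # hypothetical repo id
```
Is this correct, or is there a recommended way to do it from a Kaggle notebook?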
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
shared above
### Expected behavior
load dataset to hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6153/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6145/comments | https://api.github.com/repos/huggingface/datasets/issues/6145/events | https://github.com/huggingface/datasets/pull/6145 | 1,847,811,310 | PR_kwDODunzps5Xx5If | 6,145 | Export to_iterable_dataset to document | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-12T07:00:14 | 2023-08-15T17:04:01 | 2023-08-15T16:55:24 | CONTRIBUTOR | null | Fix the export of a missing method of `Dataset` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6145/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6145",
"html_url": "https://github.com/huggingface/datasets/pull/6145",
"diff_url": "https://github.com/huggingface/datasets/pull/6145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6145.patch",
"merged_at": "2023-08-15T16:55:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6144/comments | https://api.github.com/repos/huggingface/datasets/issues/6144/events | https://github.com/huggingface/datasets/issues/6144 | 1,847,296,711 | I_kwDODunzps5uG4LH | 6,144 | NIH exporter file not found | {
"login": "brando90",
"id": 1855278,
"node_id": "MDQ6VXNlcjE4NTUyNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brando90",
"html_url": "https://github.com/brando90",
"followers_url": "https://api.github.com/users/brando90/followers",
"following_url": "https://api.github.com/users/brando90/following{/other_user}",
"gists_url": "https://api.github.com/users/brando90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brando90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brando90/subscriptions",
"organizations_url": "https://api.github.com/users/brando90/orgs",
"repos_url": "https://api.github.com/users/brando90/repos",
"events_url": "https://api.github.com/users/brando90/events{/privacy}",
"received_events_url": "https://api.github.com/users/brando90/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2023-08-11T19:05:25 | 2023-08-14T23:28:38 | null | NONE | null | ### Describe the bug
I can't use or download the NIH ExPORTER subset of the Pile.
```
15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights()
16 File "/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py", line 474, in experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights
17 column_names = next(iter(dataset)).keys()
18 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
19 for key, example in ex_iterable:
20 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 207, in __iter__
21 yield from self.generate_examples_fn(**self.kwargs)
22 File "/lfs/ampere1/0/brando9/.cache/huggingface/modules/datasets_modules/datasets/EleutherAI--pile/ebea56d358e91cf4d37b0fde361d563bed1472fbd8221a21b38fc8bb4ba554fb/pile.py", line 236, in _generate_examples
23 with zstd.open(open(files[subset], "rb"), "rt", encoding="utf-8") as f:
24 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/streaming.py", line 74, in wrapper
25 return function(*args, download_config=download_config, **kwargs)
26 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen
27 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
28 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py", line 134, in open
29 return self.__enter__()
30 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py", line 102, in __enter__
31 f = self.fs.open(self.path, mode=mode)
32 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/spec.py", line 1241, in open
33 f = self._open(
34 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py", line 356, in _open
35 size = size or self.info(path, **kwargs)["size"]
36 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper
37 return sync(self.loop, func, *args, **kwargs)
38 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync
39 raise return_result
40 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner
41 result[0] = await coro
42 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py", line 430, in _info
43 raise FileNotFoundError(url) from exc
44 FileNotFoundError: https://the-eye.eu/public/AI/pile_preliminary_components/NIH_ExPORTER_awarded_grant_text.jsonl.zst
```
### Steps to reproduce the bug
run this:
```python
from datasets import load_dataset
path, name = 'EleutherAI/pile', 'nih_exporter'
# -- Get data set
dataset = load_dataset(path, name, streaming=True, split="train").with_format("torch")
batch = dataset.take(512)
print(f'{batch=}')
```
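For reference, a quick reachability check for the upstream file (a sketch; the URL is taken from the traceback above):
```python
import requests

url = "https://the-eye.eu/public/AI/pile_preliminary_components/NIH_ExPORTER_awarded_grant_text.jsonl.zst"
print(requests.head(url, allow_redirects=True).status_code)
```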
### Expected behavior
print the batch
### Environment info
```
(beyond_scale) brando9@ampere1:~/beyond-scale-language-data-diversity$ datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.14.4
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6144/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6142/comments | https://api.github.com/repos/huggingface/datasets/issues/6142/events | https://github.com/huggingface/datasets/issues/6142 | 1,846,205,216 | I_kwDODunzps5uCtsg | 6,142 | the-stack-dedup fails to generate | {
"login": "michaelroyzen",
"id": 45830328,
"node_id": "MDQ6VXNlcjQ1ODMwMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelroyzen",
"html_url": "https://github.com/michaelroyzen",
"followers_url": "https://api.github.com/users/michaelroyzen/followers",
"following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions",
"organizations_url": "https://api.github.com/users/michaelroyzen/orgs",
"repos_url": "https://api.github.com/users/michaelroyzen/repos",
"events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelroyzen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | 4 | 2023-08-11T05:10:49 | 2023-08-17T09:26:13 | 2023-08-17T09:26:13 | NONE | null | ### Describe the bug
I'm getting an error generating the-stack-dedup with datasets 2.13.1, and with 2.14.4 nothing happens.
### Steps to reproduce the bug
My code:
```python
import os
import datasets as ds
MY_CACHE_DIR = "/home/ubuntu/the-stack-dedup-local"
MY_TOKEN="my-token"
the_stack_ds = ds.load_dataset("bigcode/the-stack-dedup", split="train", download_mode="reuse_cache_if_exists", cache_dir=MY_CACHE_DIR, use_auth_token=MY_TOKEN, num_proc=64)
```
The exception:
```
Generating train split: 233248251 examples [54:31, 57280.00 examples/s]
multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/build
er.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/packa
ged_modules/parquet/parquet.py", line 82, in _generate_tables
yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/packa
ged_modules/parquet/parquet.py", line 61, in _cast_table
pa_table = table_cast(pa_table, self.info.features.arrow_schema)
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/table
.py", line 2324, in table_cast
return cast_table_to_schema(table, schema)
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/table
.py", line 2282, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nb
ecause column names don't match")
ValueError: Couldn't cast
hexsha: string
size: int64
ext: string
lang: string
max_stars_repo_path: string
max_stars_repo_name: string
max_stars_repo_head_hexsha: string
max_stars_repo_licenses: list<item: string>
child 0, item: string
max_stars_count: int64
max_stars_repo_stars_event_min_datetime: string
max_stars_repo_stars_event_max_datetime: string
max_issues_repo_path: string
max_issues_repo_name: string
max_issues_repo_head_hexsha: string
max_issues_repo_licenses: list<item: string>
child 0, item: string
max_issues_count: int64
max_issues_repo_issues_event_min_datetime: string
max_issues_repo_issues_event_max_datetime: string
max_forks_repo_path: string
max_forks_repo_name: string
max_forks_repo_head_hexsha: string
max_forks_repo_licenses: list<item: string>
child 0, item: string
max_forks_count: int64
max_forks_repo_forks_event_min_datetime: string
max_forks_repo_forks_event_max_datetime: string
content: string
avg_line_length: double
max_line_length: int64
alphanum_fraction: double
__id__: int64
-- schema metadata --
huggingface: '{"info": {"features": {"hexsha": {"dtype": "string", "_type' + 1979
to
{'hexsha': Value(dtype='string', id=None), 'size': Value(dtype='int64', id=None), 'ext': Value(dtype='string', id=None), 'lang': Value(dtype='string', id=None), 'max_stars_repo_path': Value(dtype='string', id=None), 'max_stars_repo_name': Value(dtype='string', id=None), 'max_stars_repo_head_hexsha': Value(dtype='string', id=None), 'max_stars_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_stars_count': Value(dtype='int64', id=None), 'max_stars_repo_stars_event_min_datetime': Value(dtype='string', id=None), 'max_stars_repo_stars_event_max_datetime': Value(dtype='string', id=None), 'max_issues_repo_path': Value(dtype='string', id=None), 'max_issues_repo_name': Value(dtype='string', id=None), 'max_issues_repo_head_hexsha': Value(dtype='string', id=None), 'max_issues_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_issues_count': Value(dtype='int64', id=None), 'max_issues_repo_issues_event_min_datetime': Value(dtype='string', id=None), 'max_issues_repo_issues_event_max_datetime': Value(dtype='string', id=None), 'max_forks_repo_path': Value(dtype='string', id=None), 'max_forks_repo_name': Value(dtype='string', id=None), 'max_forks_repo_head_hexsha': Value(dtype='string', id=None), 'max_forks_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_forks_count': Value(dtype='int64', id=None), 'max_forks_repo_forks_event_min_datetime': Value(dtype='string', id=None), 'max_forks_repo_forks_event_max_datetime': Value(dtype='string', id=None), 'content': Value(dtype='string', id=None), 'avg_line_length': Value(dtype='float64', id=None), 'max_line_length': Value(dtype='int64', id=None), 'alphanum_fraction': Value(dtype='float64', id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.10/site-packages/multiprocess/p
ool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils
/py_utils.py", line 1328, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/build
er.py", line 1912, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating th
e dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while genera
ting the dataset
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/download_the_stack.py", line 7, in <module>
the_stack_ds = ds.load_dataset("bigcode/the-stack-dedup", split="tr
ain", download_mode="reuse_cache_if_exists", cache_dir=MY_CACHE_DIR, us
e_auth_token=MY_TOKEN, num_proc=64)
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/load.
py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/build
er.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/build
er.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/build
er.py", line 1796, in _prepare_split
for job_id, done, content in iflatmap_unordered(
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils
/py_utils.py", line 1354, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils
/py_utils.py", line 1354, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/home/ubuntu/.local/lib/python3.10/site-packages/multiprocess/p
ool.py", line 774, in get
raise self._value
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
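For reference, a small sketch to inspect the schema of one of the downloaded parquet shards (the path below is hypothetical); the schema printed in the error above contains an extra `__id__` column that the expected features do not list:
```python
import pyarrow.parquet as pq

# hypothetical path to one of the cached shards under MY_CACHE_DIR
print(pq.read_schema("/home/ubuntu/the-stack-dedup-local/path/to/shard.parquet"))
```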
### Expected behavior
The dataset downloads properly. @lhoestq @loub
### Environment info
Datasets 2.13.1, large VM with 2TB RAM, Ubuntu 20.04 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6142/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6141/comments | https://api.github.com/repos/huggingface/datasets/issues/6141/events | https://github.com/huggingface/datasets/issues/6141 | 1,846,117,729 | I_kwDODunzps5uCYVh | 6,141 | TypeError: ClientSession._request() got an unexpected keyword argument 'https' | {
"login": "q935970314",
"id": 35994018,
"node_id": "MDQ6VXNlcjM1OTk0MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/35994018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/q935970314",
"html_url": "https://github.com/q935970314",
"followers_url": "https://api.github.com/users/q935970314/followers",
"following_url": "https://api.github.com/users/q935970314/following{/other_user}",
"gists_url": "https://api.github.com/users/q935970314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/q935970314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/q935970314/subscriptions",
"organizations_url": "https://api.github.com/users/q935970314/orgs",
"repos_url": "https://api.github.com/users/q935970314/repos",
"events_url": "https://api.github.com/users/q935970314/events{/privacy}",
"received_events_url": "https://api.github.com/users/q935970314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-11T02:40:32 | 2023-08-30T13:51:33 | 2023-08-30T13:51:33 | NONE | null | ### Describe the bug
Hello, when I ran the [code snippet](https://huggingface.co/docs/datasets/v2.14.4/en/loading#json) from the documentation, I encountered the following problem:
```
Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> base_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/"
>>> dataset = load_dataset("json", data_files={"train": base_url + "train-v1.1.json", "validation": base_url + "dev-v1.1.json"}, field="data")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 2112, in load_dataset
builder_instance = load_dataset_builder(
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 1798, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 1413, in dataset_module_factory
).get_module()
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 949, in get_module
data_files = DataFilesDict.from_patterns(
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/data_files.py", line 672, in from_patterns
DataFilesList.from_patterns(
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/data_files.py", line 578, in from_patterns
resolve_pattern(
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/data_files.py", line 340, in resolve_pattern
for filepath, info in fs.glob(pattern, detail=True).items()
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/asyn.py", line 113, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/asyn.py", line 98, in sync
raise return_result
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/asyn.py", line 53, in _runner
result[0] = await coro
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/implementations/http.py", line 449, in _glob
elif await self._exists(path):
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/implementations/http.py", line 306, in _exists
r = await session.get(self.encode_url(path), **kw)
File "/home/liushuai/anaconda3/lib/python3.10/site-packages/aiohttp/client.py", line 922, in get
self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs)
TypeError: ClientSession._request() got an unexpected keyword argument 'https'
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
base_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/"
dataset = load_dataset("json", data_files={"train": base_url + "train-v1.1.json", "validation": base_url + "dev-v1.1.json"}, field="data")
```
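For reference, a small sketch to print the versions involved, since this looks like it could be an fsspec/aiohttp combination issue (an assumption on my side, not a confirmed diagnosis):
```python
import aiohttp
import fsspec

import datasets

print(datasets.__version__, fsspec.__version__, aiohttp.__version__)
```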
### Expected behavior
The dataset should load normally.
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.4.54-2-x86_64-with-glibc2.27
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6141/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6140/comments | https://api.github.com/repos/huggingface/datasets/issues/6140/events | https://github.com/huggingface/datasets/issues/6140 | 1,845,384,712 | I_kwDODunzps5t_lYI | 6,140 | Misalignment between file format specified in configs metadata YAML and the inferred builder | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2023-08-10T15:07:34 | 2023-08-17T20:37:20 | 2023-08-17T20:37:20 | MEMBER | null | There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV):
```yaml
configs:
- config_name: default
data_files:
- split: train
path: data.csv
```
and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not appear in the configs metadata YAML.
See: https://huggingface.co/datasets/freddyaboulton/chatinterface_with_image_csv/discussions/1
CC: @freddyaboulton @polinaeterna | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6140/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6139/comments | https://api.github.com/repos/huggingface/datasets/issues/6139/events | https://github.com/huggingface/datasets/issues/6139 | 1,844,991,583 | I_kwDODunzps5t-FZf | 6,139 | Offline dataset viewer | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2023-08-10T11:30:00 | 2023-08-26T19:30:40 | null | NONE | null | ### Feature request
The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the Hub. Is there a way to run a dataset viewer offline, i.e. to run code that opens some kind of HTML page that makes it easy to view the dataset?
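Something along these lines would already help (a minimal sketch using only pandas; the local path is hypothetical, and this is of course not a replacement for the real viewer):
```python
from datasets import load_from_disk

ds = load_from_disk("/path/to/my_dataset")  # a dataset previously saved with save_to_disk
preview = ds.select(range(min(100, len(ds)))).to_pandas()
preview.to_html("dataset_preview.html")  # open this file in any browser
```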
### Motivation
I want to easily view my dataset even when it is stored locally.
### Your contribution
N.A. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6139/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6138/comments | https://api.github.com/repos/huggingface/datasets/issues/6138/events | https://github.com/huggingface/datasets/pull/6138 | 1,844,952,496 | PR_kwDODunzps5XoH2V | 6,138 | Ignore CI lint rule violation in Pickler.memoize | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-10T11:03:15 | 2023-08-10T11:31:45 | 2023-08-10T11:22:56 | MEMBER | null | This PR ignores the violation of the lint rule E721 in `Pickler.memoize`.
The lint rule violation was introduced in this PR:
- #3182
@lhoestq is there a reason you did not use `isinstance` instead?
As a hotfix, we just ignore the violation of the lint rule.
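For context, a rough illustration of what E721 flags and what ignoring it on a line looks like (an illustrative snippet, not the actual `Pickler.memoize` code):
```python
x = {}

if type(x) == dict:  # E721: linters prefer isinstance() or `is` for type checks
    pass

if type(x) == dict:  # noqa: E721
    pass
```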
Fix #6136. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6138/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6138",
"html_url": "https://github.com/huggingface/datasets/pull/6138",
"diff_url": "https://github.com/huggingface/datasets/pull/6138.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6138.patch",
"merged_at": "2023-08-10T11:22:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6137/comments | https://api.github.com/repos/huggingface/datasets/issues/6137/events | https://github.com/huggingface/datasets/issues/6137 | 1,844,952,312 | I_kwDODunzps5t97z4 | 6,137 | (`from_spark()`) Unable to connect HDFS in pyspark YARN setting | {
"login": "kyoungrok0517",
"id": 1051900,
"node_id": "MDQ6VXNlcjEwNTE5MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1051900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyoungrok0517",
"html_url": "https://github.com/kyoungrok0517",
"followers_url": "https://api.github.com/users/kyoungrok0517/followers",
"following_url": "https://api.github.com/users/kyoungrok0517/following{/other_user}",
"gists_url": "https://api.github.com/users/kyoungrok0517/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyoungrok0517/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyoungrok0517/subscriptions",
"organizations_url": "https://api.github.com/users/kyoungrok0517/orgs",
"repos_url": "https://api.github.com/users/kyoungrok0517/repos",
"events_url": "https://api.github.com/users/kyoungrok0517/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyoungrok0517/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-08-10T11:03:08 | 2023-08-10T11:03:08 | null | NONE | null | ### Describe the bug
related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613
---
Hello. I'm trying to interact with HDFS storage from the driver and workers of a pyspark YARN cluster. More precisely, I'm using **huggingface's `datasets`** ([link](https://github.com/huggingface/datasets)) library, which relies on pyarrow to communicate with HDFS. The `from_spark()` method ([link](https://huggingface.co/docs/datasets/use_with_spark#load-from-spark)) is what I'm invoking in my script.
Below is the error I'm encountering. Note that I've masked sensitive paths. My code is sent to the worker containers (docker) from the driver container and then executed. I confirmed that in both driver and worker images I can connect to HDFS using pyarrow, since the envs and required jars are properly set, but strangely that becomes impossible when the same image runs as a remote worker process.
These are some peculiarities in my environment that might have caused this issue.
* **Cluster requires kerberos authentication**
* But I think the error message implies that's not the problem in this case
* **The user that runs the worker process is different from the one that built the docker image**
* To avoid permission-related issues I made all directories that are accessed from the script accessible to everyone
* **Pyspark-part of my code has no problem interacting with HDFS.**
  * Even pyarrow doesn't have any problem when I run the code in an interactive session of the same docker images (driver, worker)
  * The problem occurs only when it runs as the cluster's worker runtime
Hope I could get some help. Thanks.
```bash
2023-08-08 18:51:19,638 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-08-08 18:51:20,280 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
23/08/08 18:51:22 WARN TaskSetManager: Lost task 0.0 in stage 142.0 (TID 9732) (ac3bax2062.bdp.bdata.ai executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 830, in main
process()
File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 820, in process
out_iter = func(split_index, iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func
File "/root/spark/python/pyspark/rdd.py", line 828, in func
File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe
open(probe_file, "a")
File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper
return function(*args, download_config=download_config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open
out = open_files(
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files
fs, fs_token, paths = get_fs_token_paths(
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem
return cls(**storage_options)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__
fs = HadoopFileSystem(
^^^^^^^^^^^^^^^^^
File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: HDFS connection failed
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
at org.apache.spark.scheduler.Task.run(Task.scala:139)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
23/08/08 18:51:24 WARN TaskSetManager: Lost task 0.1 in stage 142.0 (TID 9733) (ac3iax2079.bdp.bdata.ai executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 830, in main
process()
File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 820, in process
out_iter = func(split_index, iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func
File "/root/spark/python/pyspark/rdd.py", line 828, in func
File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe
open(probe_file, "a")
File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper
return function(*args, download_config=download_config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open
out = open_files(
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files
fs, fs_token, paths = get_fs_token_paths(
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem
return cls(**storage_options)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__
fs = HadoopFileSystem(
^^^^^^^^^^^^^^^^^
File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: HDFS connection failed
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
at org.apache.spark.scheduler.Task.run(Task.scala:139)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
23/08/08 18:51:38 WARN TaskSetManager: Lost task 0.2 in stage 142.0 (TID 9734) (<MASKED> executor 4): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 830, in main
process()
File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 820, in process
out_iter = func(split_index, iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func
File "/root/spark/python/pyspark/rdd.py", line 828, in func
File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe
open(probe_file, "a")
File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper
return function(*args, download_config=download_config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open
out = open_files(
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files
fs, fs_token, paths = get_fs_token_paths(
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem
return cls(**storage_options)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__
fs = HadoopFileSystem(
^^^^^^^^^^^^^^^^^
File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: HDFS connection failed
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
at org.apache.spark.scheduler.Task.run(Task.scala:139)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
### Steps to reproduce the bug
Use the `from_spark()` function in a PySpark YARN setting. I set `cache_dir` to an HDFS path.
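A minimal sketch of the setup that triggers this (the HDFS URI and the toy DataFrame are placeholders, not the actual job):
```python
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.appName("from_spark_repro").getOrCreate()
df = spark.createDataFrame([("a",), ("b",)], ["text"])

# Pointing cache_dir at HDFS is what fails: the executor-side cache probe
# (create_cache_and_write_probe) raises "OSError: HDFS connection failed".
ds = Dataset.from_spark(df, cache_dir="hdfs://namenode:8020/tmp/hf_cache")
```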
### Expected behavior
Works as described in the documentation.
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6137/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6136/comments | https://api.github.com/repos/huggingface/datasets/issues/6136/events | https://github.com/huggingface/datasets/issues/6136 | 1,844,887,866 | I_kwDODunzps5t9sE6 | 6,136 | CI check_code_quality error: E721 Do not compare types, use `isinstance()` | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-08-10T10:19:50 | 2023-08-10T11:22:58 | 2023-08-10T11:22:58 | MEMBER | null | After latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error:
```
src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()`
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6136/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6135/comments | https://api.github.com/repos/huggingface/datasets/issues/6135/events | https://github.com/huggingface/datasets/pull/6135 | 1,844,870,943 | PR_kwDODunzps5Xn2AT | 6,135 | Remove unused allowed_extensions param | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-10T10:09:54 | 2023-08-10T12:08:38 | 2023-08-10T12:00:02 | MEMBER | null | This PR removes unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6135/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6135",
"html_url": "https://github.com/huggingface/datasets/pull/6135",
"diff_url": "https://github.com/huggingface/datasets/pull/6135.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6135.patch",
"merged_at": "2023-08-10T12:00:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6134/comments | https://api.github.com/repos/huggingface/datasets/issues/6134/events | https://github.com/huggingface/datasets/issues/6134 | 1,844,535,142 | I_kwDODunzps5t8V9m | 6,134 | `datasets` cannot be installed alongside `apache-beam` | {
"login": "boyleconnor",
"id": 6520892,
"node_id": "MDQ6VXNlcjY1MjA4OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boyleconnor",
"html_url": "https://github.com/boyleconnor",
"followers_url": "https://api.github.com/users/boyleconnor/followers",
"following_url": "https://api.github.com/users/boyleconnor/following{/other_user}",
"gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions",
"organizations_url": "https://api.github.com/users/boyleconnor/orgs",
"repos_url": "https://api.github.com/users/boyleconnor/repos",
"events_url": "https://api.github.com/users/boyleconnor/events{/privacy}",
"received_events_url": "https://api.github.com/users/boyleconnor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-10T06:54:32 | 2023-09-01T03:19:49 | 2023-08-10T15:22:10 | NONE | null | ### Describe the bug
If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), both appear to install successfully; however, actually trying to do anything, such as importing the `load_dataset` method from `datasets`, results in a crash.
I think the problem is that `apache-beam` version 2.49.0 requires `dill>=0.3.1.1,<0.3.2`, while the latest version of `multiprocess` (0.70.15), on which `datasets` depends, requires `dill>=0.3.7`. This causes the dependency resolver to fall back to an older version of `multiprocess`, which leads to `datasets` crashing, since it does not appear to be compatible with those older versions.
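A small sketch that surfaces the conflicting `dill` pins from the installed package metadata (assumes both packages are installed; the exact requirement strings may differ by version):
```python
from importlib.metadata import requires

for pkg in ("apache-beam", "multiprocess"):
    dill_reqs = [r for r in (requires(pkg) or []) if r.lower().startswith("dill")]
    print(pkg, "->", dill_reqs)
```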
### Steps to reproduce the bug
See this [Google Colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing) to easily reproduce the bug.
In some environments, I have been able to reproduce the bug by running the following in Bash:
```bash
$ pip install datasets apache-beam
```
then the following in a Python shell:
```python
from datasets import load_dataset
```
Here is my stacktrace from running on Google Colab:
<details>
<summary>stacktrace</summary>
```
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
20 __version__ = "2.14.4"
21
---> 22 from .arrow_dataset import Dataset
23 from .arrow_reader import ReadInstruction
24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
64
65 from . import config
---> 66 from .arrow_reader import ArrowReader
67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
68 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
28 import pyarrow.parquet as pq
29
---> 30 from .download.download_config import DownloadConfig
31 from .naming import _split_re, filenames_for_dataset_split
32 from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables
[/usr/local/lib/python3.10/dist-packages/datasets/download/__init__.py](https://localhost:8080/#) in <module>
7
8 from .download_config import DownloadConfig
----> 9 from .download_manager import DownloadManager, DownloadMode
10 from .streaming_download_manager import StreamingDownloadManager
[/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py](https://localhost:8080/#) in <module>
33 from ..utils.info_utils import get_size_checksum_dict
34 from ..utils.logging import get_logger, is_progress_bar_enabled, tqdm
---> 35 from ..utils.py_utils import NestedDataStructure, map_nested, size_str
36 from .download_config import DownloadConfig
37
[/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <module>
38 import dill
39 import multiprocess
---> 40 import multiprocess.pool
41 import numpy as np
42 from packaging import version
[/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in <module>
607 #
608
--> 609 class ThreadPool(Pool):
610
611 from .dummy import Process
[/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in ThreadPool()
609 class ThreadPool(Pool):
610
--> 611 from .dummy import Process
612
613 def __init__(self, processes=None, initializer=None, initargs=()):
[/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py](https://localhost:8080/#) in <module>
85 #
86
---> 87 class Condition(threading._Condition):
88 # XXX
89 if sys.version_info < (3, 0):
AttributeError: module 'threading' has no attribute '_Condition'
```
</details>
I've also found that attempting to install `datasets` and `apache-beam` together in certain environments (e.g. via pip inside a conda env) simply causes pip to hang indefinitely.
### Expected behavior
I would expect to be able to import methods from `datasets` without crashing. I have tested that this is possible as long as I do not attempt to install `apache-beam`.
### Environment info
Google Colab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6134/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6133/comments | https://api.github.com/repos/huggingface/datasets/issues/6133/events | https://github.com/huggingface/datasets/issues/6133 | 1,844,511,519 | I_kwDODunzps5t8QMf | 6,133 | Dataset is slower after calling `to_iterable_dataset` | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-08-10T06:36:23 | 2023-08-16T09:18:54 | null | CONTRIBUTOR | null | ### Describe the bug
Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert it to an `IterableDataset`?
### Steps to reproduce the bug
Iterate over any dataset after converting it to an `IterableDataset`.
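A minimal timing sketch of what I mean (the dataset name is just a small placeholder dataset, not the one I actually tested):
```python
import time
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
iterable_ds = ds.to_iterable_dataset()

for name, d in (("map-style", ds), ("iterable", iterable_ds)):
    start = time.perf_counter()
    for _ in d:
        pass
    print(f"{name}: {time.perf_counter() - start:.3f}s")
```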
### Expected behavior
Maybe it should be faster on a big dataset? I only tested on a small dataset.
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6133/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6132/comments | https://api.github.com/repos/huggingface/datasets/issues/6132/events | https://github.com/huggingface/datasets/issues/6132 | 1,843,491,020 | I_kwDODunzps5t4XDM | 6,132 | to_iterable_dataset is missing in document | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-09T15:15:03 | 2023-08-16T04:43:36 | 2023-08-16T04:43:29 | CONTRIBUTOR | null | ### Describe the bug
`to_iterable_dataset` is missing from the documentation.
### Steps to reproduce the bug
`to_iterable_dataset` is missing from the documentation.
### Expected behavior
document enhancement
### Environment info
unrelated | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6132/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6130/comments | https://api.github.com/repos/huggingface/datasets/issues/6130/events | https://github.com/huggingface/datasets/issues/6130 | 1,843,158,846 | I_kwDODunzps5t3F8- | 6,130 | default config name doesn't work when config kwargs are specified. | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 15 | 2023-08-09T12:43:15 | 2023-08-22T10:03:41 | null | CONTRIBUTOR | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522
If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` for their customized `BuilderConfig`, that logic is ignored, and the dataset cannot select the default config from among multiple configs.
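A simplified paraphrase of the guarded branch (an illustrative helper, not the actual `datasets` code — see the permalink above for the real source):
```python
def _pick_config_name(config_name, config_kwargs, builder_configs, default_config_name):
    # Simplified paraphrase: the default is only used when *no* config kwargs are passed.
    if config_name is None and builder_configs and not config_kwargs:
        return default_config_name
    # With config_kwargs set, the guard above is skipped, so a multi-config
    # dataset never falls back to its default config and resolution fails.
    return config_name
```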
### Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('/dataset/with/multiple/config') # Ok
datasets.load_dataset('/dataset/with/multiple/config', some_field_in_config='some') # Err
```
### Expected behavior
Default config behavior should be consistent.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6130/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6129/comments | https://api.github.com/repos/huggingface/datasets/issues/6129/events | https://github.com/huggingface/datasets/pull/6129 | 1,841,563,517 | PR_kwDODunzps5Xcmuw | 6,129 | Release 2.14.4 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-08T15:43:56 | 2023-08-08T16:08:22 | 2023-08-08T15:49:06 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6129/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6129",
"html_url": "https://github.com/huggingface/datasets/pull/6129",
"diff_url": "https://github.com/huggingface/datasets/pull/6129.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6129.patch",
"merged_at": "2023-08-08T15:49:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6128/comments | https://api.github.com/repos/huggingface/datasets/issues/6128/events | https://github.com/huggingface/datasets/issues/6128 | 1,841,545,493 | I_kwDODunzps5tw8EV | 6,128 | IndexError: Invalid key: 88 is out of bounds for size 0 | {
"login": "TomasAndersonFang",
"id": 38727343,
"node_id": "MDQ6VXNlcjM4NzI3MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomasAndersonFang",
"html_url": "https://github.com/TomasAndersonFang",
"followers_url": "https://api.github.com/users/TomasAndersonFang/followers",
"following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions",
"organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs",
"repos_url": "https://api.github.com/users/TomasAndersonFang/repos",
"events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-08T15:32:08 | 2023-08-11T13:35:09 | 2023-08-11T13:35:09 | NONE | null | ### Describe the bug
This bug occurs when I use `torch.compile(model)` in my code, which seems to trigger an error in the `datasets` library.
### Steps to reproduce the bug
I use the following code to fine-tune Falcon on my private dataset.
```python
import transformers
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
AutoConfig,
DataCollatorForSeq2Seq,
Trainer,
Seq2SeqTrainer,
HfArgumentParser,
Seq2SeqTrainingArguments,
BitsAndBytesConfig,
)
from peft import (
LoraConfig,
get_peft_model,
get_peft_model_state_dict,
prepare_model_for_int8_training,
set_peft_model_state_dict,
)
import torch
import os
import evaluate
import functools
from datasets import load_dataset
import bitsandbytes as bnb
import logging
import json
import copy
from typing import Dict, Optional, Sequence
from dataclasses import dataclass, field
# Lora settings
LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT= 0.05
LORA_TARGET_MODULES = ["query_key_value"]
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="Salesforce/codegen2-7B")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
train_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."})
num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
# cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."})
def tokenize(text, tokenizer, max_seq_len=512, add_eos_token=True):
result = tokenizer(
text,
truncation=True,
max_length=max_seq_len,
padding=False,
return_tensors=None,
)
if (
result["input_ids"][-1] != tokenizer.eos_token_id
and len(result["input_ids"]) < max_seq_len
and add_eos_token
):
result["input_ids"].append(tokenizer.eos_token_id)
result["attention_mask"].append(1)
if add_eos_token and len(result["input_ids"]) >= max_seq_len:
result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id
result["attention_mask"][max_seq_len - 1] = 1
result["labels"] = result["input_ids"].copy()
return result
def main():
parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
config = AutoConfig.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
trust_remote_code=True,
)
if training_args.is_lora:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
torch_dtype=torch.float16,
trust_remote_code=True,
load_in_8bit=True,
quantization_config=BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0
),
)
model = prepare_model_for_int8_training(model)
config = LoraConfig(
r=LORA_R,
lora_alpha=LORA_ALPHA,
target_modules=LORA_TARGET_MODULES,
lora_dropout=LORA_DROPOUT,
bias="none",
task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
else:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
torch_dtype=torch.float16,
cache_dir=data_args.cache_path,
trust_remote_code=True,
)
model.config.use_cache = False
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
print_trainable_parameters(model)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
model_max_length=training_args.model_max_length,
padding_side="left",
use_fast=True,
trust_remote_code=True,
)
tokenizer.pad_token = tokenizer.eos_token
# Load dataset
def generate_and_tokenize_prompt(sample):
input_text = sample["input"]
target_text = sample["output"] + tokenizer.eos_token
full_text = input_text + target_text
tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=512)
tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=512)
input_len = len(tokenized_input_text["input_ids"]) - 1 # -1 for eos token
tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:]
return tokenized_full_text
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.eval_file is not None:
data_files["eval"] = data_args.eval_file
dataset = load_dataset(data_args.data_path, data_files=data_files)
train_dataset = dataset["train"]
eval_dataset = dataset["eval"]
train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)
# Evaluation metrics
def compute_metrics(eval_preds, tokenizer):
metric = evaluate.load('exact_match')
preds, labels = eval_preds
# In case the model returns more than the prediction logits
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Replace -100s in the labels as we can't decode them
labels[labels == -100] = tokenizer.pad_token_id
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Some simple post-processing
decoded_preds = [pred.strip() for pred in decoded_preds]
decoded_labels = [label.strip() for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
return {'exact_match': result['exact_match']}
compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer)
model = torch.compile(model)
# Training
trainer = Trainer(
model=model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
args=training_args,
data_collator=data_collator,
compute_metrics=compute_metrics_fn,
)
trainer.train()
trainer.save_state()
trainer.save_model(output_dir=training_args.output_dir)
tokenizer.save_pretrained(save_directory=training_args.output_dir)
if __name__ == "__main__":
main()
```
When I didn't use `torch.compile(model)`, my code worked well. But when I added this line to my code, it produced the following error:
```
Traceback (most recent call last):
File "falcon_sft.py", line 230, in <module>
main()
File "falcon_sft.py", line 223, in main
trainer.train()
File "python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "python3.10/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "python3.10/site-packages/accelerate/data_loader.py", line 384, in __iter__
current_batch = next(dataloader_iter)
File "python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__
batch = self.__getitem__(keys)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table
_check_valid_index_key(key, size)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
_check_valid_index_key(int(max(key)), size=size)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 88 is out of bounds for size 0
```
So I'm confused about why this error occurs and how to fix it. Is it produced by `datasets` or by `torch.compile`?
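The error says the queried table has size 0, so a quick sanity check (a hypothetical diagnostic, not part of my original script) is to inspect the mapped dataset right before training; in similar reports the usual suspect is the `Trainer` pruning all columns, which `remove_unused_columns=False` in `TrainingArguments` works around:
```python
print(train_dataset.num_rows, train_dataset.column_names)
# If the rows/columns look fine here but the IndexError appears only with
# torch.compile, try TrainingArguments(..., remove_unused_columns=False).
```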
### Expected behavior
I want to use `torch.compile` in my code.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6128/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6127/comments | https://api.github.com/repos/huggingface/datasets/issues/6127/events | https://github.com/huggingface/datasets/pull/6127 | 1,839,746,721 | PR_kwDODunzps5XWdP5 | 6,127 | Fix authentication issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2023-08-07T15:41:25 | 2023-08-08T15:24:59 | 2023-08-08T15:16:22 | MEMBER | null | This PR fixes 3 authentication issues:
- Fix authentication when passing `token`.
- Fix authentication in `Audio.decode_example` and `Image.decode_example`.
- Fix authentication to resolve `data_files` in repositories without script.
This PR also fixes our CI so that we properly test when passing `token` and we do not use the token stored in `HfFolder`.
Fix #6126.
## Details
### Fix authentication when passing `token`
See c0a77dc943de68a17f23f141517028c734c78623
The root issue occurred when the `token` was set on an already instantiated `DownloadConfig` and was therefore not propagated to `self._storage_options`:
```python
download_config.token = token
```
As this usage pattern is very common, the fix consists of overriding `DownloadConfig.__setattr__`.
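Roughly, the idea looks like this (a simplified sketch rather than the actual implementation; field names are illustrative):
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DownloadConfig:
    storage_options: dict = field(default_factory=dict)
    token: Optional[str] = None

    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        # Re-sync the filesystem options whenever `token` is (re)assigned, so
        # `download_config.token = token` on an existing instance propagates.
        if name == "token" and value is not None:
            self.storage_options.setdefault("hf", {})["token"] = value
```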
This fixes authentication issues in the following functions:
- `load_dataset` and `load_dataset_builder`
- `Dataset.push_to_hub` and `Dataset.push_to_hub`
- `inspect.get_dataset_config_info`, `inspect.get_dataset_infos` and `inspect.get_dataset_split_names`
### Fix authentication in `Audio.decode_example` and `Image.decode_example`.
See: 58e62af004b6b8b84dcfd897a4bc71637cfa6c3f
The `token` was not set because the code wrongly tried to parse the `repo_id` from an HTTP URL (`"http://..."`) instead of an HfFileSystem URL (`"hf://"`).
### Fix authentication to resolve `data_files` in repositories without script
See: e4684fc1032321abf0d494b0c130ea7c82ebda80
This is fixed by passing `download_config` to the function `create_builder_configs_from_metadata_configs` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6127/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6127",
"html_url": "https://github.com/huggingface/datasets/pull/6127",
"diff_url": "https://github.com/huggingface/datasets/pull/6127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6127.patch",
"merged_at": "2023-08-08T15:16:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6126/comments | https://api.github.com/repos/huggingface/datasets/issues/6126/events | https://github.com/huggingface/datasets/issues/6126 | 1,839,675,320 | I_kwDODunzps5tpze4 | 6,126 | Private datasets do not load when passing token | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 4 | 2023-08-07T15:06:47 | 2023-08-08T15:16:23 | 2023-08-08T15:16:23 | MEMBER | null | ### Describe the bug
Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`.
This is an unplanned, backward-incompatible breaking change.
Note that private datasets do load if instead `download_config` is passed:
```python
from datasets import DownloadConfig, load_dataset
ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>"))
ds
```
gives
```
Dataset({
features: ['text'],
num_rows: 4
})
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>")
```
gives
```
---------------------------------------------------------------------------
EmptyDatasetError Traceback (most recent call last)
[<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>")
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2107
2108 # Create a dataset builder
-> 2109 builder_instance = load_dataset_builder(
2110 path=path,
2111 name=name,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs)
1793 download_config = download_config.copy() if download_config else DownloadConfig()
1794 download_config.storage_options.update(storage_options)
-> 1795 dataset_module = dataset_module_factory(
1796 path,
1797 revision=revision,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
1485 if isinstance(e1, EmptyDatasetError):
-> 1486 raise e1 from None
1487 if isinstance(e1, FileNotFoundError):
1488 raise FileNotFoundError(
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1474 download_config=download_config,
1475 download_mode=download_mode,
-> 1476 ).get_module()
1477 except (
1478 Exception
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self)
1030 sanitize_patterns(self.data_files)
1031 if self.data_files is not None
-> 1032 else get_data_patterns(base_path, download_config=self.download_config)
1033 )
1034 data_files = DataFilesDict.from_patterns(
[/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config)
457 return _get_data_files_patterns(resolver)
458 except FileNotFoundError:
--> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
460
461
EmptyDatasetError: The directory at hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files
```
### Expected behavior
The dataset should load.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6126/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6125/comments | https://api.github.com/repos/huggingface/datasets/issues/6125/events | https://github.com/huggingface/datasets/issues/6125 | 1,837,980,986 | I_kwDODunzps5tjV06 | 6,125 | Reinforcement Learning and Robotics are not task categories in HF datasets metadata | {
"login": "StoneT2000",
"id": 35373228,
"node_id": "MDQ6VXNlcjM1MzczMjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/35373228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StoneT2000",
"html_url": "https://github.com/StoneT2000",
"followers_url": "https://api.github.com/users/StoneT2000/followers",
"following_url": "https://api.github.com/users/StoneT2000/following{/other_user}",
"gists_url": "https://api.github.com/users/StoneT2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StoneT2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StoneT2000/subscriptions",
"organizations_url": "https://api.github.com/users/StoneT2000/orgs",
"repos_url": "https://api.github.com/users/StoneT2000/repos",
"events_url": "https://api.github.com/users/StoneT2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/StoneT2000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-08-05T23:59:42 | 2023-08-18T12:28:42 | 2023-08-18T12:28:42 | NONE | null | ### Describe the bug
In https://huggingface.co/models there are task categories for RL and robotics, but there are none in https://huggingface.co/datasets.
Our lab is currently moving our datasets over to Hugging Face and would like to be able to add those two tags.
Moreover, we see some older datasets that do have these tags, but we can't seem to add them ourselves.
### Steps to reproduce the bug
1. Create a new dataset on Hugging Face
2. Try to type reinforcement-learning or robotics into the task categories; it does not allow you to commit
### Expected behavior
Expected to be able to add RL and robotics as task categories, as some previous datasets have these tags.
### Environment info
N/A | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6125/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6124/comments | https://api.github.com/repos/huggingface/datasets/issues/6124/events | https://github.com/huggingface/datasets/issues/6124 | 1,837,868,112 | I_kwDODunzps5ti6RQ | 6,124 | Datasets crashing runs due to KeyError | {
"login": "conceptofmind",
"id": 25208228,
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conceptofmind",
"html_url": "https://github.com/conceptofmind",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-05T17:48:56 | 2023-08-20T17:33:15 | null | NONE | null | ### Describe the bug
Hi all,
I have been running into a pretty persistent issue recently when trying to load datasets.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
I receive a KeyError which crashes the runs.
```
Traceback (most recent call last):
main()
train_dataset = load_dataset(
^^^^^^^^^^^^^
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
dataset_module = dataset_module_factory(
^^^^^^^^^^^^^^^^^^^^^^^
raise e1 from None
).get_module()
^^^^^^^^^^^^
else get_data_patterns(base_path, download_config=self.download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
return _get_data_files_patterns(resolver)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
data_files = pattern_resolver(pattern)
^^^^^^^^^^^^^^^^^^^^^^^^^
fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)]
^^^^^^^^^^^^^^
allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):
listing = self.ls(path, detail=True, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"last_modified": parse_datetime(tree_item["lastCommit"]["date"]),
~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'lastCommit'
```
Any help would be greatly appreciated.
Thank you,
Enrico
### Steps to reproduce the bug
Load the dataset from the Huggingface hub.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
### Expected behavior
Loads the dataset.
### Environment info
datasets-2.14.3
CUDA 11.8
Python 3.11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6124/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6123/comments | https://api.github.com/repos/huggingface/datasets/issues/6123/events | https://github.com/huggingface/datasets/issues/6123 | 1,837,789,294 | I_kwDODunzps5tinBu | 6,123 | Inaccurate Bounding Boxes in "wildreceipt" Dataset | {
"login": "HamzaGbada",
"id": 50714796,
"node_id": "MDQ6VXNlcjUwNzE0Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/50714796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamzaGbada",
"html_url": "https://github.com/HamzaGbada",
"followers_url": "https://api.github.com/users/HamzaGbada/followers",
"following_url": "https://api.github.com/users/HamzaGbada/following{/other_user}",
"gists_url": "https://api.github.com/users/HamzaGbada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamzaGbada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamzaGbada/subscriptions",
"organizations_url": "https://api.github.com/users/HamzaGbada/orgs",
"repos_url": "https://api.github.com/users/HamzaGbada/repos",
"events_url": "https://api.github.com/users/HamzaGbada/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamzaGbada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-05T14:34:13 | 2023-08-17T14:25:27 | 2023-08-17T14:25:26 | NONE | null | ### Describe the bug
I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, namely `load_dataset("Theivaprakasham/wildreceipt")` and `load_dataset("jinhybr/WildReceipt")`, and the actual labels and corresponding bounding boxes present in the dataset.
To illustrate this divergence, I've provided two examples in the form of screenshots. These screenshots highlight the contrasting outcomes between my personal implementation of the dataloader and the implementation offered by Hugging Face:
**Example 1:**
![image](https://github.com/huggingface/datasets/assets/50714796/7a6604d2-899d-4102-a008-1a28c90698f1)
![image](https://github.com/huggingface/datasets/assets/50714796/eba458c7-d3af-4868-a520-8b683aa96f66)
![image](https://github.com/huggingface/datasets/assets/50714796/9f394891-5f5b-46f7-8e52-071b724aedab)
**Example 2:**
![image](https://github.com/huggingface/datasets/assets/50714796/a2b2a8d3-124e-4990-b64a-5133cf4be2fe)
![image](https://github.com/huggingface/datasets/assets/50714796/6ee25642-35aa-40ad-ac1e-899d33be90df)
![image](https://github.com/huggingface/datasets/assets/50714796/5e42ff91-9fc4-4520-8803-0e225656f96c)
It's important to note that my dataloader implementation is based on the same dataset files as utilized in the Hugging Face implementation. For your reference, you can access the dataset files through this link: [wildreceipt dataset files](https://download.openmmlab.com/mmocr/data/wildreceipt.tar).
This inconsistency in bounding box accuracy warrants investigation and rectification for maintaining the integrity of the "wildreceipt" dataset. Your attention and assistance in addressing this matter would be greatly appreciated.
### Steps to reproduce the bug
```python
import matplotlib.pyplot as plt
from datasets import load_dataset
# Define functions to convert bounding box formats
def convert_format1(box):
x, y, w, h = box
x2, y2 = x + w, y + h
return [x, y, x2, y2]
def convert_format2(box):
x1, y1, x2, y2 = box
return [x1, y1, x2, y2]
def plot_cropped_image(image, box, title):
cropped_image = image.crop(box)
plt.imshow(cropped_image)
plt.title(title)
plt.axis('off')
plt.savefig(title+'.png')
plt.show()
doc_index = 1
word_index = 3
dataset = load_dataset("Theivaprakasham/wildreceipt")['train']
bbox_hugging_face = dataset[doc_index]['bboxes'][word_index]
text_unit_face = dataset[doc_index]['words'][word_index]
image_hugging = dataset[doc_index]['image']  # assumes the dataset exposes the document image under an 'image' column
common_box_hugface_1 = convert_format1(bbox_hugging_face)
common_box_hugface_2 = convert_format2(bbox_hugging_face)
plot_cropped_image(image_hugging, common_box_hugface_1,
                   f'Hugging Face bounding boxes (x, y, w, h format) \n its associated text unit: {text_unit_face}')
plot_cropped_image(image_hugging, common_box_hugface_2,
                   f'Hugging Face bounding boxes (x1, y1, x2, y2 format) \n its associated text unit: {text_unit_face}')
```
### Expected behavior
The bounding boxes returned by the Hugging Face loading commands for the "wildreceipt" dataset should accurately match the actual labels and bounding boxes of the dataset.
### Environment info
- Python version: 3.8
- Hugging Face datasets version: 2.14.2
- Dataset file taken from this link: https://download.openmmlab.com/mmocr/data/wildreceipt.tar | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6123/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6122/comments | https://api.github.com/repos/huggingface/datasets/issues/6122/events | https://github.com/huggingface/datasets/issues/6122 | 1,837,335,721 | I_kwDODunzps5tg4Sp | 6,122 | Upload README via `push_to_hub` | {
"login": "liyucheng09",
"id": 27999909,
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyucheng09",
"html_url": "https://github.com/liyucheng09",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-08-04T21:00:27 | 2023-08-21T18:18:54 | 2023-08-21T18:18:54 | NONE | null | ### Feature request
`push_to_hub` now allows users to upload datasets programmatically. However, based on the latest docs, we still need to open the dataset page to add a README file manually.
However, I did discover the snippet that initializes a README for every `push_to_hub`:
```
dataset_card = (
DatasetCard(
"---\n"
+ str(dataset_card_data)
+ "\n---\n"
+ f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)'
)
if dataset_card is None
else dataset_card
)
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
path_or_fileobj=str(dataset_card).encode(),
path_in_repo="README.md",
repo_id=repo_id,
token=token,
repo_type="dataset",
revision=branch,
)
```
So, if `push_to_hub` let us upload a README file of our own instead of the auto-generated one, it would save a ton of time and would definitely alleviate the current "lack-of-dataset-card" situation.
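In the meantime, a minimal sketch of how a hand-written card could already be uploaded right after `push_to_hub` using `huggingface_hub` (the repo id and card text below are placeholders):

```python
from datasets import Dataset
from huggingface_hub import DatasetCard

repo_id = "my-username/my-dataset"  # hypothetical repo id

# Push the data itself first.
Dataset.from_dict({"text": ["hello", "world"]}).push_to_hub(repo_id)

# Then overwrite the auto-generated README with a hand-written dataset card.
card = DatasetCard("---\nlicense: mit\n---\n# My dataset\n\nA short, human-written description.")
card.push_to_hub(repo_id, repo_type="dataset")
```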
### Motivation
As elaborated above.
### Your contribution
I might be able to make a PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6122/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6121/comments | https://api.github.com/repos/huggingface/datasets/issues/6121/events | https://github.com/huggingface/datasets/pull/6121 | 1,836,761,712 | PR_kwDODunzps5XMsWd | 6,121 | Small typo in the code example of create imagefolder dataset | {
"login": "WangXin93",
"id": 19688994,
"node_id": "MDQ6VXNlcjE5Njg4OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WangXin93",
"html_url": "https://github.com/WangXin93",
"followers_url": "https://api.github.com/users/WangXin93/followers",
"following_url": "https://api.github.com/users/WangXin93/following{/other_user}",
"gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions",
"organizations_url": "https://api.github.com/users/WangXin93/orgs",
"repos_url": "https://api.github.com/users/WangXin93/repos",
"events_url": "https://api.github.com/users/WangXin93/events{/privacy}",
"received_events_url": "https://api.github.com/users/WangXin93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-04T13:36:59 | 2023-08-04T13:45:32 | 2023-08-04T13:41:43 | NONE | null | Fix typo in the code example of loading an imagefolder dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6121/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6121",
"html_url": "https://github.com/huggingface/datasets/pull/6121",
"diff_url": "https://github.com/huggingface/datasets/pull/6121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6121.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6120/comments | https://api.github.com/repos/huggingface/datasets/issues/6120/events | https://github.com/huggingface/datasets/issues/6120 | 1,836,026,938 | I_kwDODunzps5tb4w6 | 6,120 | Lookahead streaming support? | {
"login": "PicoCreator",
"id": 17175484,
"node_id": "MDQ6VXNlcjE3MTc1NDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/17175484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PicoCreator",
"html_url": "https://github.com/PicoCreator",
"followers_url": "https://api.github.com/users/PicoCreator/followers",
"following_url": "https://api.github.com/users/PicoCreator/following{/other_user}",
"gists_url": "https://api.github.com/users/PicoCreator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PicoCreator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PicoCreator/subscriptions",
"organizations_url": "https://api.github.com/users/PicoCreator/orgs",
"repos_url": "https://api.github.com/users/PicoCreator/repos",
"events_url": "https://api.github.com/users/PicoCreator/events{/privacy}",
"received_events_url": "https://api.github.com/users/PicoCreator/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-08-04T04:01:52 | 2023-08-17T17:48:42 | null | NONE | null | ### Feature request
From what I understand, a streaming dataset currently pulls and processes the data only as it is requested.
This can introduce significant latency when data is loaded into the training process, since training has to wait for each segment.
The delays might be dataset specific (or even mapping-instruction/tokenizer specific).
Would it be possible to introduce a `streaming_lookahead` parameter for predictable workloads (even a shuffled dataset with a fixed seed)? Since we can predict in advance what the next few data samples will be, they could be fetched while the current set is being trained on.
With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the various latencies involved while waiting for the dataset to be ready between batches.
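Purely as an illustration of the idea (this is not an existing `datasets` feature, and `streaming_dataset`/`train_step` below are placeholders), a background-thread prefetcher over any iterator could look roughly like this:

```python
import threading
from queue import Queue

def prefetch(iterable, lookahead: int = 8):
    """Yield items from `iterable`, fetching up to `lookahead` items ahead in a background thread."""
    queue = Queue(maxsize=lookahead)
    sentinel = object()

    def producer():
        for item in iterable:
            queue.put(item)
        queue.put(sentinel)  # signal the end of the stream

    threading.Thread(target=producer, daemon=True).start()
    while (item := queue.get()) is not sentinel:
        yield item

# Usage sketch: wrap a streaming dataset so the next samples download while the current one trains.
# for sample in prefetch(iter(streaming_dataset), lookahead=32):
#     train_step(sample)
```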
### Motivation
Faster streaming performance, while training over extra large TB sized datasets
### Your contribution
I currently use HF datasets with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6120/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6119/comments | https://api.github.com/repos/huggingface/datasets/issues/6119/events | https://github.com/huggingface/datasets/pull/6119 | 1,835,996,350 | PR_kwDODunzps5XKI19 | 6,119 | [Docs] Add description of `select_columns` to guide | {
"login": "unifyh",
"id": 18213435,
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unifyh",
"html_url": "https://github.com/unifyh",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"repos_url": "https://api.github.com/users/unifyh/repos",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-04T03:13:30 | 2023-08-16T10:13:02 | 2023-08-16T10:02:52 | CONTRIBUTOR | null | Closes #6116 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6119/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6119",
"html_url": "https://github.com/huggingface/datasets/pull/6119",
"diff_url": "https://github.com/huggingface/datasets/pull/6119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6119.patch",
"merged_at": "2023-08-16T10:02:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6118/comments | https://api.github.com/repos/huggingface/datasets/issues/6118/events | https://github.com/huggingface/datasets/issues/6118 | 1,835,940,417 | I_kwDODunzps5tbjpB | 6,118 | IterableDataset.from_generator() fails with pickle error when provided a generator or iterator | {
"login": "finkga",
"id": 1281051,
"node_id": "MDQ6VXNlcjEyODEwNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1281051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finkga",
"html_url": "https://github.com/finkga",
"followers_url": "https://api.github.com/users/finkga/followers",
"following_url": "https://api.github.com/users/finkga/following{/other_user}",
"gists_url": "https://api.github.com/users/finkga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finkga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finkga/subscriptions",
"organizations_url": "https://api.github.com/users/finkga/orgs",
"repos_url": "https://api.github.com/users/finkga/repos",
"events_url": "https://api.github.com/users/finkga/events{/privacy}",
"received_events_url": "https://api.github.com/users/finkga/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-04T01:45:04 | 2023-08-17T17:58:27 | null | NONE | null | ### Describe the bug
**Description**
Instantiating IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when the `generator` argument is supplied with a generator object (rather than a generator function).
**Code example**
```
from pathlib import Path
from typing import List

from datasets import IterableDataset

def line_generator(files: List[Path]):
    if isinstance(files, str):
        files = [Path(files)]
    for file in files:
        if isinstance(file, str):
            file = Path(file)
        yield from open(file, 'r').readlines()
...
model_training_files = ['file1.txt', 'file2.txt', 'file3.txt']
train_dataset = IterableDataset.from_generator(generator=line_generator(model_training_files))
```
**Traceback**
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 691, in _no_cache_fields
yield
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 701, in dumps
dump(obj, file)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 676, in dump
Pickler(file, recurse=True).dump(obj)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 487, in dump
self.save(obj)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 997, in _batch_setitems
save(v)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'generator' object
### Steps to reproduce the bug
1. Create a set of text files to iterate over.
2. Create a generator that returns the lines in each file until all files are exhausted.
3. Instantiate the dataset over the generator by instantiating an IterableDataset.from_generator().
4. Wait for the explosion.
### Expected behavior
I would expect that, since the function claims to accept a generator, there would be no crash. Instead, I would expect the dataset to return all the lines in the files, as queued up in the `line_generator()` function.
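For what it's worth, the pickling error seems avoidable by passing the generator *function* itself plus `gen_kwargs` (a sketch of the workaround, reusing the names from above):

```python
train_dataset = IterableDataset.from_generator(
    line_generator,
    gen_kwargs={"files": model_training_files},
)
```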
### Environment info
datasets.__version__ == '2.13.1'
Python 3.9.6
Platform: Darwin WE35261 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6118/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6117/comments | https://api.github.com/repos/huggingface/datasets/issues/6117/events | https://github.com/huggingface/datasets/pull/6117 | 1,835,213,848 | PR_kwDODunzps5XHktw | 6,117 | Set dev version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-03T14:46:04 | 2023-08-03T14:56:59 | 2023-08-03T14:46:18 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6117/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6117",
"html_url": "https://github.com/huggingface/datasets/pull/6117",
"diff_url": "https://github.com/huggingface/datasets/pull/6117.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6117.patch",
"merged_at": "2023-08-03T14:46:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6116/comments | https://api.github.com/repos/huggingface/datasets/issues/6116/events | https://github.com/huggingface/datasets/issues/6116 | 1,835,098,484 | I_kwDODunzps5tYWF0 | 6,116 | [Docs] The "Process" how-to guide lacks description of `select_columns` function | {
"login": "unifyh",
"id": 18213435,
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unifyh",
"html_url": "https://github.com/unifyh",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"repos_url": "https://api.github.com/users/unifyh/repos",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-08-03T13:45:10 | 2023-08-16T10:02:53 | 2023-08-16T10:02:53 | CONTRIBUTOR | null | ### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide.
### Motivation
This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120), #5468 and #5474). However, it has not been added to the guide since its implementation in PR #5480.
Mentioning it in the guide would help future users discover this added feature.
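For context, a minimal usage sketch of the function (with a toy, made-up dataset) might look like:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1], "meta": ["x", "y"]})
ds = ds.select_columns(["text", "label"])  # keeps only the requested columns, dropping "meta"
print(ds.column_names)  # ['text', 'label']
```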
### Your contribution
I could submit a PR to add a brief description of the function to said guide. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6116/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6115/comments | https://api.github.com/repos/huggingface/datasets/issues/6115/events | https://github.com/huggingface/datasets/pull/6115 | 1,834,765,485 | PR_kwDODunzps5XGChP | 6,115 | Release: 2.14.3 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-08-03T10:18:32 | 2023-08-03T15:08:02 | 2023-08-03T10:24:57 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6115/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6115",
"html_url": "https://github.com/huggingface/datasets/pull/6115",
"diff_url": "https://github.com/huggingface/datasets/pull/6115.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6115.patch",
"merged_at": "2023-08-03T10:24:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6114/comments | https://api.github.com/repos/huggingface/datasets/issues/6114/events | https://github.com/huggingface/datasets/issues/6114 | 1,834,015,584 | I_kwDODunzps5tUNtg | 6,114 | Cache not being used when loading commonvoice 8.0.0 | {
"login": "clabornd",
"id": 31082141,
"node_id": "MDQ6VXNlcjMxMDgyMTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clabornd",
"html_url": "https://github.com/clabornd",
"followers_url": "https://api.github.com/users/clabornd/followers",
"following_url": "https://api.github.com/users/clabornd/following{/other_user}",
"gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clabornd/subscriptions",
"organizations_url": "https://api.github.com/users/clabornd/orgs",
"repos_url": "https://api.github.com/users/clabornd/repos",
"events_url": "https://api.github.com/users/clabornd/events{/privacy}",
"received_events_url": "https://api.github.com/users/clabornd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-02T23:18:11 | 2023-08-18T23:59:00 | 2023-08-18T23:59:00 | NONE | null | ### Describe the bug
I have Common Voice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the Arrow files etc., and was used as the cached version the last time I touched the EC2 instance I'm working on. Now, with the same command that downloaded it initially:
```
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")
```
it tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32`
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
2. dataset is updated by maintainers
3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
### Expected behavior
I expect that it uses the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`.
I'm not sure what's happening in step 2, but if, say, the issue is that the dataset referenced by "mozilla-foundation/common_voice_8_0" was modified by the maintainers, how would I force `datasets` to point to the original version I downloaded?
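For what it's worth, one way to pin a specific version of the Hub repository is the `revision` argument of `load_dataset` (a sketch only; the revision string below is a placeholder for the commit sha or tag that matches the original download):

```python
dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision="main",  # placeholder: use the commit sha/tag of the version originally downloaded
    use_auth_token="<mytoken>",
)
```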
EDIT: It was indeed the case that the maintainers had updated the dataset (v8.0.0). However, I still can't load the dataset from disk instead of redownloading, with for example:
```
load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en")
> ...
> File [~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938](.../ python3.10/site-packages/datasets/table.py:1938), in cast_array_to_feature(array, feature, allow_number_to_str)
1937 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
...
1794 e = e.__context__
-> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
datasets==2.7.0
python==3.10.8
OS: AWS Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6114/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6113/comments | https://api.github.com/repos/huggingface/datasets/issues/6113/events | https://github.com/huggingface/datasets/issues/6113 | 1,833,854,030 | I_kwDODunzps5tTmRO | 6,113 | load_dataset() fails with streamlit caching inside docker | {
"login": "fierval",
"id": 987574,
"node_id": "MDQ6VXNlcjk4NzU3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fierval",
"html_url": "https://github.com/fierval",
"followers_url": "https://api.github.com/users/fierval/followers",
"following_url": "https://api.github.com/users/fierval/following{/other_user}",
"gists_url": "https://api.github.com/users/fierval/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fierval/subscriptions",
"organizations_url": "https://api.github.com/users/fierval/orgs",
"repos_url": "https://api.github.com/users/fierval/repos",
"events_url": "https://api.github.com/users/fierval/events{/privacy}",
"received_events_url": "https://api.github.com/users/fierval/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-02T20:20:26 | 2023-08-21T18:18:27 | 2023-08-21T18:18:27 | NONE | null | ### Describe the bug
When calling `load_dataset` in a Streamlit application running within a Docker container, I get a failure with the error message:
EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files
Traceback:
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 62, in <module>
dashboard()
File "/home/user/app/app.py", line 47, in dashboard
feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper
return cached_func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "/home/user/app/hf_interface.py", line 16, in load_data
hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module
else get_data_patterns(base_path, download_config=self.download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns
raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
### Steps to reproduce the bug
```python
import json

import streamlit as st
from datasets import load_dataset

@st.cache_resource
def load_data(repo_id: str, hf_token=None):
    """Load data from the HuggingFace Hub."""
    hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
    hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"])
    return hf_dataset
```
### Expected behavior
I expect the dataset to load.
Note: this works fine with datasets==2.13.1.
### Environment info
datasets==2.14.2,
Ubuntu bionic-based Docker container. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6113/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6112/comments | https://api.github.com/repos/huggingface/datasets/issues/6112/events | https://github.com/huggingface/datasets/issues/6112 | 1,833,693,299 | I_kwDODunzps5tS_Bz | 6,112 | yaml error using push_to_hub with generated README.md | {
"login": "kevintee",
"id": 1643887,
"node_id": "MDQ6VXNlcjE2NDM4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevintee",
"html_url": "https://github.com/kevintee",
"followers_url": "https://api.github.com/users/kevintee/followers",
"following_url": "https://api.github.com/users/kevintee/following{/other_user}",
"gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevintee/subscriptions",
"organizations_url": "https://api.github.com/users/kevintee/orgs",
"repos_url": "https://api.github.com/users/kevintee/repos",
"events_url": "https://api.github.com/users/kevintee/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevintee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | 1 | 2023-08-02T18:21:21 | 2023-08-17T16:53:24 | null | NONE | null | ### Describe the bug
When I construct a dataset with the following features:
```
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"tokens": Sequence(Value(dtype="string")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
}
)
```
and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error:
```
Traceback (most recent call last):
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
response.raise_for_status()
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/looppayments/multitask_document_classification_dataset/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module>
build_dataset()
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset
push_to_hub(dataset, "multitask_document_classification_dataset")
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub
dataset.push_to_hub(f"looppayments/{dataset_name}", private=True)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file
commit_info = self.create_commit(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e)
Bad request for commit endpoint:
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9)
7 | - 3
8 | - 224
9 | - 224
10 | dtype: float64
--------------^
11 | - name: input_ids
12 | sequence: int64
```
My guess is that the auto-generated YAML cannot be parsed for some reason.
### Steps to reproduce the bug
The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet:
```
from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value
from PIL import Image
from transformers import AutoProcessor
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"tokens": Sequence(Value(dtype="string")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
}
)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
def preprocess_dataset(rows):
# Get images
images = [
Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"]
]
encoding = processor(
images,
rows["tokens"],
boxes=rows["bbox"],
truncation=True,
padding="max_length",
)
encoding["tokens"] = rows["tokens"]
return encoding
dataset = dataset.map(
preprocess_dataset,
batched=True,
batch_size=5,
features=features,
)
```
### Expected behavior
Using datasets==2.11.0, I'm able to `push_to_hub` successfully with no issues, but with datasets==2.14.2, I run into the above error.
### Environment info
- `datasets` version: 2.14.2
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6112/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6111/comments | https://api.github.com/repos/huggingface/datasets/issues/6111/events | https://github.com/huggingface/datasets/issues/6111 | 1,832,781,654 | I_kwDODunzps5tPgdW | 6,111 | raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." ) | {
"login": "2catycm",
"id": 41530341,
"node_id": "MDQ6VXNlcjQxNTMwMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2catycm",
"html_url": "https://github.com/2catycm",
"followers_url": "https://api.github.com/users/2catycm/followers",
"following_url": "https://api.github.com/users/2catycm/following{/other_user}",
"gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2catycm/subscriptions",
"organizations_url": "https://api.github.com/users/2catycm/orgs",
"repos_url": "https://api.github.com/users/2catycm/repos",
"events_url": "https://api.github.com/users/2catycm/events{/privacy}",
"received_events_url": "https://api.github.com/users/2catycm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-02T09:17:29 | 2023-08-29T02:00:28 | 2023-08-29T02:00:28 | NONE | null | ### Describe the bug
For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object.
However, even when one finally has the local files on disk, loading them into dataset objects is still buggy.
### Steps to reproduce the bug
Steps to reproduce the bug:
1. Find the CIFAR dataset on Hugging Face: https://huggingface.co/datasets/cifar100/tree/main
2. Click the ":" button to show the "Clone repository" option, and then follow the prompts in the box:
```bash
cd my_directory_absolute
git lfs install
git clone https://huggingface.co/datasets/cifar100
ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK.
```
3. Write a Python file to try to load the dataset:
```python
from datasets import load_dataset, load_from_disk
dataset = load_from_disk("my_directory_absolute/cifar100")
```
Notice that, according to issue #3700, it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead.
4. Then you will see the error reported:
```log
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[5], line 9
1 from datasets import load_dataset, load_from_disk
----> 9 dataset = load_from_disk("my_directory_absolute/cifar100")
File [~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232), in load_from_disk(dataset_path, fs, keep_in_memory, storage_options)
2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
2231 else:
-> 2232 raise FileNotFoundError(
2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory."
2234 )
FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory.
```
### Expected behavior
The dataset should load successfully.
### Environment info
```bash
datasets-cli env
```
-> results:
```txt
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.14.2
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6111/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6110/comments | https://api.github.com/repos/huggingface/datasets/issues/6110/events | https://github.com/huggingface/datasets/issues/6110 | 1,831,110,633 | I_kwDODunzps5tJIfp | 6,110 | [BUG] Dataset initialized from in-memory data does not create cache. | {
"login": "MattYoon",
"id": 57797966,
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MattYoon",
"html_url": "https://github.com/MattYoon",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-01T11:58:58 | 2023-08-17T14:03:01 | 2023-08-17T14:03:00 | NONE | null | ### Describe the bug
A `Dataset` initialized from in-memory data (a dictionary in my case; I haven't tested other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# the code below was run a second time so that the map result can be loaded from the cache if it exists
from datasets import load_dataset, Dataset
dataset = load_dataset("tatsu-lab/alpaca")['train']
dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map
print(len(dataset.cache_files))
# 1
# copy the exact same data but initialize from a dictionary
memory_dataset = Dataset.from_dict({
'instruction': dataset['instruction'],
'input': dataset['input'],
'output': dataset['output'],
'text': dataset['text']})
memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map
print(len(memory_dataset.cache_files))
# Map: 100%|██████████| 52002/52002
# 0
```
### Expected behavior
The `map` function should create a cache regardless of how the `Dataset` was created.
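As a possible workaround (only a sketch, and it assumes writing the data to disk once is acceptable), materializing the in-memory dataset with `save_to_disk`/`load_from_disk` appears to make subsequent `map` calls cacheable:
```python
from datasets import Dataset, load_from_disk

memory_dataset = Dataset.from_dict({"input": ["a", "b"]})    # toy data for illustration
memory_dataset.save_to_disk("alpaca_copy")                   # hypothetical path
disk_dataset = load_from_disk("alpaca_copy")                 # now backed by an on-disk Arrow file
disk_dataset = disk_dataset.map(lambda x: {"input": x["input"] + "hi"})
print(len(disk_dataset.cache_files))                         # expected to be > 0
```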
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6110/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6109/comments | https://api.github.com/repos/huggingface/datasets/issues/6109/events | https://github.com/huggingface/datasets/issues/6109 | 1,830,753,793 | I_kwDODunzps5tHxYB | 6,109 | Problems in downloading Amazon reviews from HF | {
"login": "610v4nn1",
"id": 52964960,
"node_id": "MDQ6VXNlcjUyOTY0OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/610v4nn1",
"html_url": "https://github.com/610v4nn1",
"followers_url": "https://api.github.com/users/610v4nn1/followers",
"following_url": "https://api.github.com/users/610v4nn1/following{/other_user}",
"gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions",
"organizations_url": "https://api.github.com/users/610v4nn1/orgs",
"repos_url": "https://api.github.com/users/610v4nn1/repos",
"events_url": "https://api.github.com/users/610v4nn1/events{/privacy}",
"received_events_url": "https://api.github.com/users/610v4nn1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 1 | 2023-08-01T08:38:29 | 2023-08-02T07:12:07 | 2023-08-02T07:12:07 | NONE | null | ### Describe the bug
I have a script downloading `amazon_reviews_multi`.
When the download starts, I get
```
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.43MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 928kB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.81MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s]
Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s]
```
The file is clearly too small to contain the requested dataset; in fact, it contains an error message:
```
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error>
```
Obviously, the script fails:
```
> raise DatasetGenerationError("An error occurred while generating the dataset") from e
E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
1. `load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE")`
### Expected behavior
I would expect the dataset to be downloaded and processed.
### Environment info
* The problem is present with both datasets 2.12.0 and 2.14.2
* python version 3.10.12 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6109/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6108/comments | https://api.github.com/repos/huggingface/datasets/issues/6108/events | https://github.com/huggingface/datasets/issues/6108 | 1,830,347,187 | I_kwDODunzps5tGOGz | 6,108 | Loading local datasets got strangely stuck | {
"login": "LoveCatc",
"id": 48412571,
"node_id": "MDQ6VXNlcjQ4NDEyNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoveCatc",
"html_url": "https://github.com/LoveCatc",
"followers_url": "https://api.github.com/users/LoveCatc/followers",
"following_url": "https://api.github.com/users/LoveCatc/following{/other_user}",
"gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions",
"organizations_url": "https://api.github.com/users/LoveCatc/orgs",
"repos_url": "https://api.github.com/users/LoveCatc/repos",
"events_url": "https://api.github.com/users/LoveCatc/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoveCatc/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-01T02:28:06 | 2023-08-17T17:36:45 | null | NONE | null | ### Describe the bug
I tried to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (it is a dataset for an NLP model). The code snippet is:
```python
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train']
```
However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. While trying to find the cause and a solution, I noticed a really strange behavior. If I load the dataset in this way:
```python
dlist = list()
for _ in LIST_OF_FILE_PATHS:
dlist.append(load_dataset("json", data_files=_)['train'])
ds = concatenate_datasets(dlist)
```
I can actually load all the files successfully, despite the slow speed. But if I load them in a batch as above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated that way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to suspend it and then kill it. If I use more than 2 CPUs, a Control-C simply causes the following error:
```bash
^C
Process ForkPoolWorker-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get
res = self._reader.recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
Generating train split: 92431 examples [01:23, 1104.25 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module>
a = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split
for job_id, done, content in iflatmap_unordered(
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get
raise TimeoutError
multiprocess.context.TimeoutError
```
I have validated the basic correctness of these `.jsonl` files: they are correctly formatted (otherwise they could not be loaded individually by `load_dataset`), though some of the JSON lines contain very long text (more than 1e7 characters). I do not know if this could be the problem. There should not be any bottleneck in system resources: the whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1 TB of RAM.
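For completeness, a validation pass along these lines (only a sketch, reusing the `LIST_OF_FILE_PATHS` from above) confirms the format and surfaces the very long texts:
```python
import json

max_len = 0
for path in LIST_OF_FILE_PATHS:
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            record = json.loads(line)                     # raises on malformed JSON
            assert set(record) == {"text"}, f"{path}:{lineno} has unexpected keys"
            max_len = max(max_len, len(record["text"]))
print("longest text:", max_len)
```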
Thanks for your efforts and patience! Any suggestion or help would be appreciated.
### Steps to reproduce the bug
1. use load_dataset() with `data_files = LIST_OF_FILES`
### Expected behavior
All the files should be smoothly loaded.
### Environment info
- Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each json structure only contains one key: `text`. Format checked.
- `datasets` version: 2.14.2
- Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609
- Pandas version: 1.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6108/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6107/comments | https://api.github.com/repos/huggingface/datasets/issues/6107/events | https://github.com/huggingface/datasets/pull/6107 | 1,829,625,320 | PR_kwDODunzps5W0rLR | 6,107 | Fix deprecation of use_auth_token in file_utils | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-07-31T16:32:01 | 2023-08-03T10:13:32 | 2023-08-03T10:04:18 | MEMBER | null | Fix issues with the deprecation of `use_auth_token` introduced by:
- #5996
in functions:
- `get_authentication_headers_for_url`
- `request_etag`
- `get_from_cache`
Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588
```
FAILED tests/job_runners/config/test_parquet_and_info.py::test__is_too_big_external_files[None-None-False] - TypeError: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token'
FAILED tests/job_runners/config/test_parquet_and_info.py::test_fill_builder_info[None-False] - libcommon.exceptions.FileSystemError: Could not read the parquet files: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token'
```
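The failures above occur because callers still pass the old `use_auth_token` keyword to functions whose new signatures only accept `token`. A minimal sketch of the kind of backward-compatible signature the fix needs (illustrative only, not the exact patch):
```python
import warnings

def get_authentication_headers_for_url(url, token=None, use_auth_token="deprecated"):
    # Keep accepting the deprecated keyword and map it onto `token`.
    if use_auth_token != "deprecated":
        warnings.warn(
            "'use_auth_token' was deprecated in favor of 'token' and will be removed in a future version.",
            FutureWarning,
        )
        token = use_auth_token
    ...  # build the headers from `token` as before
```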
Related to:
- #6094 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6107/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6107/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6107",
"html_url": "https://github.com/huggingface/datasets/pull/6107",
"diff_url": "https://github.com/huggingface/datasets/pull/6107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6107.patch",
"merged_at": "2023-08-03T10:04:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6106/comments | https://api.github.com/repos/huggingface/datasets/issues/6106/events | https://github.com/huggingface/datasets/issues/6106 | 1,829,131,223 | I_kwDODunzps5tBlPX | 6,106 | load local json_file as dataset | {
"login": "CiaoHe",
"id": 39040787,
"node_id": "MDQ6VXNlcjM5MDQwNzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CiaoHe",
"html_url": "https://github.com/CiaoHe",
"followers_url": "https://api.github.com/users/CiaoHe/followers",
"following_url": "https://api.github.com/users/CiaoHe/following{/other_user}",
"gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions",
"organizations_url": "https://api.github.com/users/CiaoHe/orgs",
"repos_url": "https://api.github.com/users/CiaoHe/repos",
"events_url": "https://api.github.com/users/CiaoHe/events{/privacy}",
"received_events_url": "https://api.github.com/users/CiaoHe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-07-31T12:53:49 | 2023-08-18T01:46:35 | 2023-08-18T01:46:35 | NONE | null | ### Describe the bug
I tried to load a local JSON file as a dataset but failed to parse the JSON file because some columns are of 'float' type.
### Steps to reproduce the bug
1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)`.
2. Then an error like `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` is triggered.
### Expected behavior
It should allow some columns to be of 'float' type, or at least convert those columns to str type.
I tried to avoid the error by naively converting the float items to str:
```python
# if col type is not str, we need to convert it to str
mapping = {}
for col in keys:
if isinstance(dataset[0][col], str):
mapping[col] = [row.get(col) for row in dataset]
else:
mapping[col] = [str(row.get(col)) for row in dataset]
```
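A less manual alternative (only a sketch; the column names below are hypothetical, and I have not verified that it avoids the error) is to declare the schema explicitly so the loader does not have to infer a single type for the mixed column:
```python
from datasets import load_dataset, Features, Value

# Declare the problematic column as a string up front; it can be cast back to
# float later once the values are cleaned.
features = Features({"text": Value("string"), "score": Value("string")})
data = load_dataset("json", data_files=JSON_PATH, features=features)
```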
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6106/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6105/comments | https://api.github.com/repos/huggingface/datasets/issues/6105/events | https://github.com/huggingface/datasets/pull/6105 | 1,829,008,430 | PR_kwDODunzps5WyiJD | 6,105 | Fix error when loading from GCP bucket | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-07-31T11:44:46 | 2023-08-01T10:48:52 | 2023-08-01T10:38:54 | MEMBER | null | Fix `resolve_pattern` for filesystems with tuple protocol.
Fix #6100.
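For context, some `fsspec` filesystems expose their protocol as a tuple of aliases rather than a single string, which is the case `resolve_pattern` has to handle. A small illustration (it assumes `gcsfs` is installed; the exact alias order may differ):
```python
import gcsfs

print(gcsfs.GCSFileSystem.protocol)  # a tuple of aliases such as ('gs', 'gcs'), not a plain string
# Naively formatting f"{protocol}://{path}" with such a value produces an invalid URL;
# pattern resolution needs to pick a single alias, e.g. protocol[0].
```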
The buggy code lines were introduced by:
- #6028 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6105/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6105",
"html_url": "https://github.com/huggingface/datasets/pull/6105",
"diff_url": "https://github.com/huggingface/datasets/pull/6105.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6105.patch",
"merged_at": "2023-08-01T10:38:54"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6104/comments | https://api.github.com/repos/huggingface/datasets/issues/6104/events | https://github.com/huggingface/datasets/issues/6104 | 1,828,959,107 | I_kwDODunzps5tA7OD | 6,104 | HF Datasets data access is extremely slow even when in memory | {
"login": "NightMachinery",
"id": 36224762,
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NightMachinery",
"html_url": "https://github.com/NightMachinery",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-07-31T11:12:19 | 2023-08-01T11:22:43 | null | CONTRIBUTOR | null | ### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long?
It's faster to produce the dataset from scratch than to access it from HF Datasets!
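A sketch of what I mean by storing the dataset as plain tensors (column names are hypothetical; it assumes numeric, fixed-length columns):
```python
import torch

plain = some_dataset.with_format(None)       # plain Python objects, no per-access formatting
features = torch.tensor(plain["feature"])    # one-time copy out of Arrow
labels = torch.tensor(plain["label"])

batch = features[:10], labels[:10]           # later access is a cheap tensor slice
```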
### Steps to reproduce the bug
I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1).
```python
#!/usr/bin/env python3
import sys
import time
import torch
from datasets import load_dataset
def main(dataset_name):
# Start the timer
start_time = time.time()
# Load the dataset from Hugging Face Hub
dataset = load_dataset(dataset_name)
# Set the dataset format as torch
dataset.set_format(type="torch")
# Perform an identity map
dataset = dataset.map(lambda example: example, batched=True, batch_size=20)
# End the timer
end_time = time.time()
# Print the time taken
print(f"Time taken: {end_time - start_time:.2f} seconds")
if __name__ == "__main__":
dataset_name = "NightMachinery/hf_datasets_bug1"
print(f"dataset_name: {dataset_name}")
main(dataset_name)
```
### Expected behavior
_
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6104/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6103/comments | https://api.github.com/repos/huggingface/datasets/issues/6103/events | https://github.com/huggingface/datasets/pull/6103 | 1,828,515,165 | PR_kwDODunzps5Ww2gV | 6,103 | Set dev version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-07-31T06:44:05 | 2023-07-31T06:55:58 | 2023-07-31T06:45:41 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6103/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6103",
"html_url": "https://github.com/huggingface/datasets/pull/6103",
"diff_url": "https://github.com/huggingface/datasets/pull/6103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6103.patch",
"merged_at": "2023-07-31T06:45:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6102/comments | https://api.github.com/repos/huggingface/datasets/issues/6102/events | https://github.com/huggingface/datasets/pull/6102 | 1,828,494,896 | PR_kwDODunzps5WwyGy | 6,102 | Release 2.14.2 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-07-31T06:27:47 | 2023-07-31T06:48:09 | 2023-07-31T06:32:58 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6102/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6102",
"html_url": "https://github.com/huggingface/datasets/pull/6102",
"diff_url": "https://github.com/huggingface/datasets/pull/6102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6102.patch",
"merged_at": "2023-07-31T06:32:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6101/comments | https://api.github.com/repos/huggingface/datasets/issues/6101/events | https://github.com/huggingface/datasets/pull/6101 | 1,828,469,648 | PR_kwDODunzps5WwspW | 6,101 | Release 2.14.2 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-07-31T06:05:36 | 2023-07-31T06:33:00 | 2023-07-31T06:18:17 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6101/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6101",
"html_url": "https://github.com/huggingface/datasets/pull/6101",
"diff_url": "https://github.com/huggingface/datasets/pull/6101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6101.patch",
"merged_at": "2023-07-31T06:18:17"
} | true |