url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | comments | created_at | updated_at | author_association | body | reactions | timeline_url | draft | pull_request | closed_at | state_reason | assignee | assignees | milestone | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5514/comments | https://api.github.com/repos/huggingface/datasets/issues/5514/events | https://github.com/huggingface/datasets/issues/5514 | 1,576,453,837 | I_kwDODunzps5d9sbN | 5,514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | {
"avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4",
"events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}",
"followers_url": "https://api.github.com/users/HallerPatrick/followers",
"following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}",
"gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HallerPatrick",
"id": 22773355,
"login": "HallerPatrick",
"node_id": "MDQ6VXNlcjIyNzczMzU1",
"organizations_url": "https://api.github.com/users/HallerPatrick/orgs",
"received_events_url": "https://api.github.com/users/HallerPatrick/received_events",
"repos_url": "https://api.github.com/users/HallerPatrick/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HallerPatrick"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | 1 | 2023-02-08T16:40:44Z | 2023-02-08T23:58:29Z | CONTRIBUTOR | ### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_from_cache_file (`bool`, defaults to `True` if caching is enabled):
If a cache file storing the current computation from `function`
can be identified, use it instead of recomputing.
```
1. `load_from_cache_file` default value is `None`, while being annotated as `bool`
2. It is inconsistent with other method signatures like `filter`, which have the default value `True`
3. The logic is inconsistent, as the `map` method checks if caching is enabled through `is_caching_enabled`. This logic is not used for other similar methods.
### Your contribution
I am not fully aware of the logic behind the caching checks. If this is just an inconsistency that grew historically, I would suggest removing the `is_caching_enabled` logic as the "default" logic. Maybe someone can give insights into whether environment variables have a higher priority than local variables or vice versa.
If this is clarified, I could adjust the source according to the "Feature request" section of this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5514/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5514/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5513/comments | https://api.github.com/repos/huggingface/datasets/issues/5513/events | https://github.com/huggingface/datasets/issues/5513 | 1,576,300,803 | I_kwDODunzps5d9HED | 5,513 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name? | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | open | false | 2 | 2023-02-08T15:13:46Z | 2023-02-08T16:01:07Z | CONTRIBUTOR | Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type`, which is a Python reserved name as you may already know. Shouldn't that be renamed to `format_type` before 3.0.0 is released?
Just wanted to get your input, and if applicable, tackle this issue myself! Thanks 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5513/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5513/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5512/comments | https://api.github.com/repos/huggingface/datasets/issues/5512/events | https://github.com/huggingface/datasets/pull/5512 | 1,576,142,432 | PR_kwDODunzps5JhtQy | 5,512 | Speed up batched PyTorch DataLoader | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | 6 | 2023-02-08T13:38:59Z | 2023-02-08T16:16:04Z | MEMBER | I implemented `__getitems__` to speed up batched data loading in PyTorch
close https://github.com/huggingface/datasets/issues/5505 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5512/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5512/timeline | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5512.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5512",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5512.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5512"
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5511/comments | https://api.github.com/repos/huggingface/datasets/issues/5511/events | https://github.com/huggingface/datasets/issues/5511 | 1,575,851,768 | I_kwDODunzps5d7Zb4 | 5,511 | Creating a dummy dataset from a bigger one | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | 2 | 2023-02-08T10:18:41Z | 2023-02-08T10:35:48Z | MEMBER | ### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```
gives:
```
~/python_bin/datasets/arrow_dataset.py in _push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4003 base_wait_time=2.0,
4004 max_retries=5,
-> 4005 max_wait_time=20.0,
4006 )
4007 return repo_id, split, uploaded_size, dataset_nbytes
~/python_bin/datasets/utils/file_utils.py in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
328 while True:
329 try:
--> 330 return func(*func_args, **func_kwargs)
331 except exceptions as err:
332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
~/hf/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
TypeError: upload_file() got an unexpected keyword argument 'identical_ok'
In [2]:
```
### Expected behavior
I would have expected this to work. For me it's the most intuitive way of creating a dummy dataset.
### Environment info
```
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.3
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5511/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5511/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-02-08T10:35:48Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5510/comments | https://api.github.com/repos/huggingface/datasets/issues/5510/events | https://github.com/huggingface/datasets/pull/5510 | 1,575,191,549 | PR_kwDODunzps5JehbR | 5,510 | Milvus integration for search | {
"avatar_url": "https://avatars.githubusercontent.com/u/81822489?v=4",
"events_url": "https://api.github.com/users/filip-halt/events{/privacy}",
"followers_url": "https://api.github.com/users/filip-halt/followers",
"following_url": "https://api.github.com/users/filip-halt/following{/other_user}",
"gists_url": "https://api.github.com/users/filip-halt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/filip-halt",
"id": 81822489,
"login": "filip-halt",
"node_id": "MDQ6VXNlcjgxODIyNDg5",
"organizations_url": "https://api.github.com/users/filip-halt/orgs",
"received_events_url": "https://api.github.com/users/filip-halt/received_events",
"repos_url": "https://api.github.com/users/filip-halt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/filip-halt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/filip-halt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/filip-halt"
} | [] | open | false | 1 | 2023-02-07T23:30:26Z | 2023-02-08T18:35:53Z | NONE | Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5510/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5510/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5510.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5510",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5510.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5510"
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5509/comments | https://api.github.com/repos/huggingface/datasets/issues/5509/events | https://github.com/huggingface/datasets/pull/5509 | 1,574,177,320 | PR_kwDODunzps5JbH-u | 5,509 | Add a static `__all__` to `__init__.py` for typecheckers | {
"avatar_url": "https://avatars.githubusercontent.com/u/14248012?v=4",
"events_url": "https://api.github.com/users/LoicGrobol/events{/privacy}",
"followers_url": "https://api.github.com/users/LoicGrobol/followers",
"following_url": "https://api.github.com/users/LoicGrobol/following{/other_user}",
"gists_url": "https://api.github.com/users/LoicGrobol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LoicGrobol",
"id": 14248012,
"login": "LoicGrobol",
"node_id": "MDQ6VXNlcjE0MjQ4MDEy",
"organizations_url": "https://api.github.com/users/LoicGrobol/orgs",
"received_events_url": "https://api.github.com/users/LoicGrobol/received_events",
"repos_url": "https://api.github.com/users/LoicGrobol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LoicGrobol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoicGrobol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LoicGrobol"
} | [] | open | false | 2 | 2023-02-07T11:42:40Z | 2023-02-08T17:48:24Z | NONE | This adds a static `__all__` field to `__init__.py`, allowing typecheckers to know which symbols are accessible from `datasets` at runtime. In particular [Pyright](https://github.com/microsoft/pylance-release/issues/2328#issuecomment-1029381258) seems to rely on this. At this point I have added all (modulo oversight) the symbols mentioned in the Reference part of [the docs](https://huggingface.co/docs/datasets), but that could be adjusted. As a side effect, only these symbols will be imported by `from datasets import *`, which may or may not be a good thing (and if it isn't, that's easy to fix).
Another option would be to add a pyi stub, but I think `__all__` should be the most pythonic solution.
This should fix #3841. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5509/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5509/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5509.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5509",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5509.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5509"
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5508/comments | https://api.github.com/repos/huggingface/datasets/issues/5508/events | https://github.com/huggingface/datasets/issues/5508 | 1,573,290,359 | I_kwDODunzps5dxoF3 | 5,508 | Saving a dataset after setting format to torch doesn't work, but only if filtering | {
"avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4",
"events_url": "https://api.github.com/users/joebhakim/events{/privacy}",
"followers_url": "https://api.github.com/users/joebhakim/followers",
"following_url": "https://api.github.com/users/joebhakim/following{/other_user}",
"gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joebhakim",
"id": 13984157,
"login": "joebhakim",
"node_id": "MDQ6VXNlcjEzOTg0MTU3",
"organizations_url": "https://api.github.com/users/joebhakim/orgs",
"received_events_url": "https://api.github.com/users/joebhakim/received_events",
"repos_url": "https://api.github.com/users/joebhakim/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joebhakim"
} | [] | open | false | 1 | 2023-02-06T21:08:58Z | 2023-02-08T14:22:26Z | NONE | ### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # saves successfully
a.filter(None).save_to_disk("test_save_filter") # does not
>> [...] TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.
# note: skipping the format change to torch lets this work.
```
### Expected behavior
Saving to work
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5508/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5507/comments | https://api.github.com/repos/huggingface/datasets/issues/5507/events | https://github.com/huggingface/datasets/issues/5507 | 1,572,667,036 | I_kwDODunzps5dvP6c | 5,507 | Optimise behaviour in respect to indices mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | 0 | 2023-02-06T14:25:55Z | 2023-02-06T14:25:55Z | CONTRIBUTOR | _Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_
Considering all this, perhaps for Datasets 3.0, we can do the following:
* have `continuous=True` by default in `.shard` (requested in the survey and makes more sense for us since it doesn't create an indices mapping)
* allow calling `save_to_disk` on "unflattened" datasets
* remove "hidden" expensive calls in `save_to_disk`, `unique`, `concatenate_datasets`, etc. For instance, instead of silently calling `flatten_indices` where it's needed, it's probably better to be explicit (considering how expensive these ops can be) and raise an error instead | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5507/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5506/comments | https://api.github.com/repos/huggingface/datasets/issues/5506/events | https://github.com/huggingface/datasets/issues/5506 | 1,571,838,641 | I_kwDODunzps5dsFqx | 5,506 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs | {
"avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4",
"events_url": "https://api.github.com/users/kheyer/events{/privacy}",
"followers_url": "https://api.github.com/users/kheyer/followers",
"following_url": "https://api.github.com/users/kheyer/following{/other_user}",
"gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kheyer",
"id": 38166299,
"login": "kheyer",
"node_id": "MDQ6VXNlcjM4MTY2Mjk5",
"organizations_url": "https://api.github.com/users/kheyer/orgs",
"received_events_url": "https://api.github.com/users/kheyer/received_events",
"repos_url": "https://api.github.com/users/kheyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kheyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kheyer"
} | [] | closed | false | 4 | 2023-02-06T03:26:03Z | 2023-02-08T18:30:08Z | NONE | ### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous shards and passing those to an `IterableDataset`. I observed an unexpected drop in GPU memory utilization, and found the batch size returned from the model had been cut in half.
When using `Trainer` with 2 GPUs and a batch size of 256, `Dataset` returns a batch of size 512 (256 per GPU), while `IterableDataset` returns a batch size of 256 (256 total). My guess is `IterableDataset` isn't accounting for multiple cards.
### Steps to reproduce the bug
```python
import datasets
from datasets import IterableDataset
from transformers import RobertaConfig
from transformers import RobertaTokenizerFast
from transformers import RobertaForMaskedLM
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
use_iterable_dataset = True
def gen_from_shards(shards):
    for shard in shards:
        for example in shard:
            yield example
dataset = datasets.load_from_disk('my_dataset.hf')
if use_iterable_dataset:
    n_shards = 100
    shards = [dataset.shard(num_shards=n_shards, index=i) for i in range(n_shards)]
    dataset = IterableDataset.from_generator(gen_from_shards, gen_kwargs={"shards": shards})
tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer", max_len=160, use_fast=True)
config = RobertaConfig(
vocab_size=8248,
max_position_embeddings=256,
num_attention_heads=8,
num_hidden_layers=6,
type_vocab_size=1)
model = RobertaForMaskedLM(config=config)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = TrainingArguments(
per_device_train_batch_size=256
# other args removed for brevity
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
trainer.train()
```
### Expected behavior
Expected `Dataset` and `IterableDataset` to have the same batch size behavior. If the current behavior is intentional, the batch size printout at the start of training should be updated. Currently, both dataset classes result in `Trainer` printing the same total batch size, even though the batch sizes sent to the GPUs are different.
### Environment info
datasets 2.7.1
transformers 4.25.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5506/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-02-08T18:30:07Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5505/comments | https://api.github.com/repos/huggingface/datasets/issues/5505/events | https://github.com/huggingface/datasets/issues/5505 | 1,571,720,814 | I_kwDODunzps5dro5u | 5,505 | PyTorch BatchSampler still loads from Dataset one-by-one | {
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
} | [] | open | false | 2 | 2023-02-06T01:14:55Z | 2023-02-07T19:37:04Z | NONE | ### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or in the code, but it seems the only way for a Dataset to be passed a list of indexes by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object. Since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one.
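For illustration, here is a minimal pure-Python sketch of that dispatch (the `ListDataset` class and `fetch` helper are invented for this example; they only mimic the logic in torch's fetcher, not its actual code):

```python
class ListDataset:
    """Toy map-style dataset that counts single-index accesses."""

    def __init__(self, data):
        self.data = data
        self.single_calls = 0

    def __getitem__(self, idx):
        self.single_calls += 1
        return self.data[idx]


def fetch(dataset, batch_indices):
    # Mimics the check in torch.utils.data._utils.fetch: if the dataset
    # exposes a plural __getitems__, it receives the whole index list at
    # once; otherwise each index triggers a separate __getitem__ call.
    getitems = getattr(dataset, "__getitems__", None)
    if getitems is not None:
        return getitems(batch_indices)
    return [dataset[i] for i in batch_indices]


ds = ListDataset(list(range(100)))
batch = fetch(ds, [3, 5, 7])
print(batch, ds.single_calls)  # [3, 5, 7] 3 -> one call per index
```

Assigning a batch-capable `__getitems__` (as the monkey-patch described below does for HF datasets, whose `__getitem__` already accepts lists) makes the fetcher take the first branch and query the dataset once per batch.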
### Steps to reproduce the bug
You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs:
```py
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```
### Expected behavior
The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one.
To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line:
```py
ds.__getitems__ = ds.__getitem__
```
...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5505/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5504/comments | https://api.github.com/repos/huggingface/datasets/issues/5504/events | https://github.com/huggingface/datasets/pull/5504 | 1,570,621,242 | PR_kwDODunzps5JPoWy | 5,504 | don't zero copy timestamps | {
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dwyatte",
"id": 2512762,
"login": "dwyatte",
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dwyatte"
} | [] | closed | false | 3 | 2023-02-03T23:39:04Z | 2023-02-08T17:28:50Z | CONTRIBUTOR | Fixes https://github.com/huggingface/datasets/issues/5495
I'm not sure whether we prefer a test here or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5504/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5504/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5504.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5504",
"merged_at": "2023-02-08T14:33:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5504.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5504"
} | 2023-02-08T14:33:17Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5502/comments | https://api.github.com/repos/huggingface/datasets/issues/5502/events | https://github.com/huggingface/datasets/pull/5502 | 1,570,091,225 | PR_kwDODunzps5JN0aX | 5,502 | Added functionality: sort datasets by multiple keys | {
"avatar_url": "https://avatars.githubusercontent.com/u/7805682?v=4",
"events_url": "https://api.github.com/users/MichlF/events{/privacy}",
"followers_url": "https://api.github.com/users/MichlF/followers",
"following_url": "https://api.github.com/users/MichlF/following{/other_user}",
"gists_url": "https://api.github.com/users/MichlF/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MichlF",
"id": 7805682,
"login": "MichlF",
"node_id": "MDQ6VXNlcjc4MDU2ODI=",
"organizations_url": "https://api.github.com/users/MichlF/orgs",
"received_events_url": "https://api.github.com/users/MichlF/received_events",
"repos_url": "https://api.github.com/users/MichlF/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MichlF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichlF/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MichlF"
} | [] | open | false | 1 | 2023-02-03T16:17:00Z | 2023-02-07T17:47:18Z | NONE | Added functionality implementation: sort datasets by multiple keys/columns as discussed in https://github.com/huggingface/datasets/issues/5425. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5502/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5502/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5502.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5502",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5502.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5502"
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5501/comments | https://api.github.com/repos/huggingface/datasets/issues/5501/events | https://github.com/huggingface/datasets/pull/5501 | 1,569,644,159 | PR_kwDODunzps5JMTn8 | 5,501 | Speeding up file downloads | {
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
} | [] | open | false | 3 | 2023-02-03T10:50:10Z | 2023-02-03T11:42:20Z | CONTRIBUTOR | Original fix: https://github.com/huggingface/huggingface_hub/pull/1267
Not sure this function is actually still called though.
I haven't run benchmarks on this. Is there a dataset whose files are hosted on the Hub through CloudFront, so we can have the same setup as in `hf_hub`?
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5501/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5501",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5501"
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5500/comments | https://api.github.com/repos/huggingface/datasets/issues/5500/events | https://github.com/huggingface/datasets/issues/5500 | 1,569,257,240 | I_kwDODunzps5diPcY | 5,500 | WMT19 custom download checksum error | {
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hannibal046",
"id": 38466901,
"login": "Hannibal046",
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hannibal046"
} | [] | closed | false | 1 | 2023-02-03T05:45:37Z | 2023-02-03T05:52:56Z | NONE | ### Describe the bug
I use the following script to download data from WMT19:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS
## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3
if __name__ == '__main__':
    dev_subsets, train_subsets = [], []
    for subset in _TRAIN_SUBSETS:
        if subset.target == 'en' and 'de' in subset.sources:
            train_subsets.append(subset.name)
    for subset in _DEV_SUBSETS:
        if subset.target == 'en' and 'de' in subset.sources:
            dev_subsets.append(subset.name)
    inspect_dataset("wmt19", "./wmt19")
    builder = load_dataset_builder(
        "./wmt19/wmt_utils.py",
        language_pair=("de", "en"),
        subsets={
            datasets.Split.TRAIN: train_subsets,
            datasets.Split.VALIDATION: dev_subsets,
        },
    )
    builder.download_and_prepare()
    ds = builder.as_dataset()
    ds.to_json("../data/wmt19/ende/data.json")
```
And I got the following error:
```
Traceback (most recent call last):
  File "draft.py", line 26, in <module>
    builder.download_and_prepare()
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare
    self._download_and_prepare(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare
    verify_checksums(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums
    raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'}
```
### Steps to reproduce the bug
see above
### Expected behavior
download data successfully
### Environment info
datasets==2.1.0
python==3.8
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5500/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-02-03T05:52:56Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5499/comments | https://api.github.com/repos/huggingface/datasets/issues/5499/events | https://github.com/huggingface/datasets/issues/5499 | 1,568,937,026 | I_kwDODunzps5dhBRC | 5,499 | `load_dataset` has ~4 seconds of overhead for cached data | {
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | 2 | 2023-02-02T23:34:50Z | 2023-02-07T19:35:11Z | NONE | ### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, with wikitext-2, comparing `load_dataset` (once cached) and `load_from_disk`, the `load_dataset` method takes 40 times longer.
⏱ 4.84s ⮜ load_dataset
⏱ 119ms ⮜ load_from_disk
### Motivation
I assume this is doing something like checking for a newer version.
If so, that's an age-old problem: do you make the user wait _every single time they load from cache_, or do you load from cache always, _then_ check for a newer version and alert if they have stale data?
For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time.
Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement.
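As a toy sketch of that option (every name here is invented for illustration; this is not the `datasets` API): serve the cached copy unconditionally, and only pay for the network round-trip when the caller asks for the freshness check.

```python
import time


def load_with_optional_check(read_cache, fetch_remote_version, local_version,
                             check_for_updates=True):
    """Return cached data; hit the (slow) version check only on request."""
    t0 = time.perf_counter()
    data = read_cache()
    # Short-circuits: the remote call never happens when the check is off.
    stale = check_for_updates and fetch_remote_version() != local_version
    return data, stale, time.perf_counter() - t0


remote_calls = []


def slow_fetch():
    remote_calls.append(1)  # stands in for an HTTP round-trip
    return "v2"


data, stale, _ = load_with_optional_check(lambda: [1, 2], slow_fetch, "v1",
                                          check_for_updates=False)
print(data, stale, len(remote_calls))  # [1, 2] False 0
```

With `check_for_updates=False` the cached data comes back without any network call; turning it on trades latency for a staleness flag.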
### Your contribution
. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5499/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5498/comments | https://api.github.com/repos/huggingface/datasets/issues/5498/events | https://github.com/huggingface/datasets/issues/5498 | 1,568,190,529 | I_kwDODunzps5deLBB | 5,498 | TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/91255010?v=4",
"events_url": "https://api.github.com/users/vmuel/events{/privacy}",
"followers_url": "https://api.github.com/users/vmuel/followers",
"following_url": "https://api.github.com/users/vmuel/following{/other_user}",
"gists_url": "https://api.github.com/users/vmuel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vmuel",
"id": 91255010,
"login": "vmuel",
"node_id": "MDQ6VXNlcjkxMjU1MDEw",
"organizations_url": "https://api.github.com/users/vmuel/orgs",
"received_events_url": "https://api.github.com/users/vmuel/received_events",
"repos_url": "https://api.github.com/users/vmuel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vmuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vmuel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vmuel"
} | [] | closed | false | 2 | 2023-02-02T14:46:49Z | 2023-02-04T17:19:37Z | NONE | ### Describe the bug
Hi,
Thanks for the amazing work on the library!
**Describe the bug**
I think I might have noticed a small bug in the filter method.
Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError.
### Steps to reproduce the bug
```
train_dataset = train_dataset.filter(
function=lambda example: example["image"] is not None,
batched=True,
batch_size=10)
```
Error message:
```
File .../lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
...
-> 5666 indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
5667 if indices_mapping is not None:
5668 indices_array = pa.array(indices_array, type=pa.uint64())
TypeError: 'bool' object is not iterable
```
**Removing batched=True bypasses the issue.**
### Expected behavior
According to the doc, "[batch_size corresponds to the] number of examples per batch provided to function if batched = True", so we shouldn't need to remove the batched=True arg?
source: https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.Dataset.filter
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5498/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5498/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-02-04T17:19:36Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
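A note on the batched-filter contract behind the report above: with `batched=True`, the predicate receives a dict of columns (each a list of values) and is expected to return one boolean per example, not a single boolean for the whole batch. The sketch below is illustrative only — the `image` column name is taken from the report, and the dict stands in for a real batch:

```python
# With batched=True, datasets passes a dict of columns; the predicate
# should return a list with one boolean per example in the batch.
def keep_non_empty(batch):
    return [image is not None for image in batch["image"]]

# With batched=False, the predicate sees one example and returns one bool.
def keep_non_empty_single(example):
    return example["image"] is not None

print(keep_non_empty({"image": ["a", None, "b"]}))  # [True, False, True]
```

Returning a single boolean from a batched predicate is what produces the `'bool' object is not iterable` error in the traceback, since the mask is then zipped against the batch indices.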
https://api.github.com/repos/huggingface/datasets/issues/5497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5497/comments | https://api.github.com/repos/huggingface/datasets/issues/5497/events | https://github.com/huggingface/datasets/pull/5497 | 1,567,601,264 | PR_kwDODunzps5JFhvc | 5,497 | Improved error message for gated/private repos | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [] | closed | false | 3 | 2023-02-02T08:56:15Z | 2023-02-02T11:26:08Z | MEMBER | Using `use_auth_token=True` is not needed anymore. If a user is logged in, the token is retrieved automatically. Also includes a mention of gated repos
See https://github.com/huggingface/huggingface_hub/pull/1064 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5497/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5497.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5497",
"merged_at": "2023-02-02T11:17:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5497.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5497"
} | 2023-02-02T11:17:15Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5496/comments | https://api.github.com/repos/huggingface/datasets/issues/5496/events | https://github.com/huggingface/datasets/issues/5496 | 1,567,301,765 | I_kwDODunzps5dayCF | 5,496 | Add a `reduce` method | {
"avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4",
"events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangir-azerbayev/followers",
"following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangir-azerbayev",
"id": 59542043,
"login": "zhangir-azerbayev",
"node_id": "MDQ6VXNlcjU5NTQyMDQz",
"organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs",
"received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events",
"repos_url": "https://api.github.com/users/zhangir-azerbayev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangir-azerbayev"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | 1 | 2023-02-02T04:30:22Z | 2023-02-03T14:11:32Z | NONE | ### Feature request
Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.
### Motivation
A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average line length of a code dataset.
### Your contribution
I haven't contributed to `datasets` before, but I don't expect this will be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack over the weekend. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5496/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
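Pending a built-in method, the kind of aggregation the request above describes can be sketched with `functools.reduce` over an iterable of examples. Everything here is illustrative — the `text` column and the stand-in list of dicts take the place of a real `datasets.Dataset`, and a built-in `Dataset.reduce()` would presumably mirror the `map`/`filter` signatures instead:

```python
from functools import reduce

# Stand-in for a datasets.Dataset with a "text" column.
examples = [
    {"text": "def f():"},
    {"text": "    return 1"},
]

# Fold (total_length, count) over the examples to get the average line length.
def step(acc, example):
    total, count = acc
    return total + len(example["text"]), count + 1

total, count = reduce(step, examples, (0, 0))
average = total / count
```

For this toy data the fold yields `(20, 2)`, i.e. an average line length of 10.0 — exactly the kind of dataset statistic the request mentions.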
https://api.github.com/repos/huggingface/datasets/issues/5495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5495/comments | https://api.github.com/repos/huggingface/datasets/issues/5495/events | https://github.com/huggingface/datasets/issues/5495 | 1,566,803,452 | I_kwDODunzps5dY4X8 | 5,495 | to_tf_dataset fails with datetime UTC columns even if not included in columns argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dwyatte",
"id": 2512762,
"login": "dwyatte",
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dwyatte"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | 2 | 2023-02-01T20:47:33Z | 2023-02-08T14:33:19Z | CONTRIBUTOR | ### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset, even columns not included in the `columns` argument. This is problematic with UTC datetime columns because they do not support zero-copy conversion. If I don't have UTC information in my datetime column, then everything works as expected.
### Steps to reproduce the bug
```python
import numpy as np
import pandas as pd
from datasets import Dataset
df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
# df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"]) # works fine
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"])
df.to_parquet("test.pq")
ds = Dataset.from_parquet("test.pq")
tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```
```
ArrowInvalid Traceback (most recent call last)
Cell In[1], line 12
8 df.to_parquet("test.pq")
11 ds = Dataset.from_parquet("test.pq")
---> 12 tf_ds = ds.to_tf_dataset(columns=["r"], batch_size=2, shuffle=True)
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers)
407 dataset = self
409 # TODO(Matt, QL): deprecate the retention of label_ids and label
--> 411 output_signature, columns_to_np_types = dataset._get_output_signature(
412 dataset,
413 collate_fn=collate_fn,
414 collate_fn_args=collate_fn_args,
415 cols_to_retain=cols_to_retain,
416 batch_size=batch_size if drop_remainder else None,
417 )
419 if "labels" in output_signature:
420 if ("label_ids" in columns or "label" in columns) and "labels" not in columns:
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches)
252 for _ in range(num_test_batches):
253 indices = sample(range(len(dataset)), test_batch_size)
--> 254 test_batch = dataset[indices]
255 if cols_to_retain is not None:
256 test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain}
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key)
2588 def __getitem__(self, key): # noqa: F811
2589 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2590 return self._getitem(
2591 key,
2592 )
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs)
2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs)
2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2575 formatted_output = format_table(
2576 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2577 )
2578 return formatted_output
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns)
632 python_formatter = PythonFormatter(features=None)
633 if format_columns is None:
--> 634 return formatter(pa_table, query_type=query_type)
635 elif query_type == "column":
636 if key in format_columns:
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type)
408 return self.format_column(pa_table)
409 elif query_type == "batch":
--> 410 return self.format_batch(pa_table)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table)
77 def format_batch(self, pa_table: pa.Table) -> Mapping:
---> 78 batch = self.numpy_arrow_extractor().extract_batch(pa_table)
79 batch = self.python_features_decoder.decode_batch(batch)
80 batch = self.recursive_tensorize(batch)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
--> 185 array: List = [
186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
185 array: List = [
--> 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy()
File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True
```
### Expected behavior
I think there are two potential issues/fixes
1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here)
2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable)
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5495/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-02-08T14:33:19Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5494/comments | https://api.github.com/repos/huggingface/datasets/issues/5494/events | https://github.com/huggingface/datasets/issues/5494 | 1,566,655,348 | I_kwDODunzps5dYUN0 | 5,494 | Update audio installation doc page | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | open | false | 3 | 2023-02-01T19:07:50Z | 2023-02-02T13:11:58Z | CONTRIBUTOR | Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too but requires a specific version of ffmpeg which is not easily installed on all linux versions but there is a custom ubuntu repo for it, we have insctructions in the code: https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L327
So we should update the doc page. But first investigate [this issue](5488). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5494/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5493/comments | https://api.github.com/repos/huggingface/datasets/issues/5493/events | https://github.com/huggingface/datasets/pull/5493 | 1,566,637,806 | PR_kwDODunzps5JCSAZ | 5,493 | Remove unused `load_from_cache_file` arg from `Dataset.shard()` docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | 3 | 2023-02-01T18:57:48Z | 2023-02-08T15:10:46Z | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5493/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5493/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5493",
"merged_at": "2023-02-08T15:03:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5493"
} | 2023-02-08T15:03:50Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5492/comments | https://api.github.com/repos/huggingface/datasets/issues/5492/events | https://github.com/huggingface/datasets/issues/5492 | 1,566,604,216 | I_kwDODunzps5dYHu4 | 5,492 | Push_to_hub in a pull request | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | open | false | 1 | 2023-02-01T18:32:14Z | 2023-02-01T18:40:46Z | MEMBER | Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name.
cc @nateraw
It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5492/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
}
] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5491/comments | https://api.github.com/repos/huggingface/datasets/issues/5491/events | https://github.com/huggingface/datasets/pull/5491 | 1,566,235,012 | PR_kwDODunzps5JA9OD | 5,491 | [MINOR] Typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki"
} | [] | closed | false | 2 | 2023-02-01T14:39:39Z | 2023-02-02T07:42:28Z | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5491/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5491.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5491",
"merged_at": "2023-02-02T07:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5491.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5491"
} | 2023-02-02T07:35:14Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5490/comments | https://api.github.com/repos/huggingface/datasets/issues/5490/events | https://github.com/huggingface/datasets/pull/5490 | 1,565,842,327 | PR_kwDODunzps5I_nz- | 5,490 | Do not add index column by default when exporting to CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | 1 | 2023-02-01T10:20:55Z | 2023-02-01T10:25:21Z | MEMBER | As pointed out by @merveenoyan, default behavior of `Dataset.to_csv` adds the index as an additional column without name.
This PR changes the default behavior, so that now the index column is not written.
To add the index column, now you need to pass `index=True` and also `index_label=<name of the index column>` to name that column.
CC: @merveenoyan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5490/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5490/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5490.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5490",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5490.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5490"
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5489/comments | https://api.github.com/repos/huggingface/datasets/issues/5489/events | https://github.com/huggingface/datasets/pull/5489 | 1,565,761,705 | PR_kwDODunzps5I_WPH | 5,489 | Pin dill lower version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | 2 | 2023-02-01T09:33:42Z | 2023-02-02T07:48:09Z | MEMBER | Pin `dill` lower version compatible with `datasets`.
Related to:
- #5487
- #288
Note that the required `dill._dill` module was introduced in dill-2.8.0; however, we have heuristically tested that datasets can only be installed with dill>=3.0.0 (otherwise pip hangs indefinitely while preparing metadata for multiprocess-0.70.7)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5489/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5489.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5489",
"merged_at": "2023-02-02T07:40:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5489.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5489"
} | 2023-02-02T07:40:43Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5488/comments | https://api.github.com/repos/huggingface/datasets/issues/5488/events | https://github.com/huggingface/datasets/issues/5488 | 1,565,025,262 | I_kwDODunzps5dSGPu | 5,488 | Error loading MP3 files from CommonVoice | {
"avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4",
"events_url": "https://api.github.com/users/kradonneoh/events{/privacy}",
"followers_url": "https://api.github.com/users/kradonneoh/followers",
"following_url": "https://api.github.com/users/kradonneoh/following{/other_user}",
"gists_url": "https://api.github.com/users/kradonneoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kradonneoh",
"id": 110259722,
"login": "kradonneoh",
"node_id": "U_kgDOBpJuCg",
"organizations_url": "https://api.github.com/users/kradonneoh/orgs",
"received_events_url": "https://api.github.com/users/kradonneoh/received_events",
"repos_url": "https://api.github.com/users/kradonneoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kradonneoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kradonneoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kradonneoh"
} | [] | open | false | 3 | 2023-01-31T21:25:33Z | 2023-02-01T15:28:56Z | NONE | ### Describe the bug
When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays:
```python
---------------------------------------------------------------------------
LibsndfileError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file)
310 try: # try torchaudio anyway because sometimes it works (depending on the os and os packages installed)
--> 311 array, sampling_rate = self._decode_mp3_torchaudio(path_or_file)
312 except RuntimeError:
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file)
351
--> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3")
353 if self.sampling_rate and self.sampling_rate != sampling_rate:
~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
204 """
--> 205 with soundfile.SoundFile(filepath, "r") as file_:
206 if file_.format != "WAV" or normalize:
~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
654 format, subtype, endian)
--> 655 self._file = self._open(file, mode_int, closefd)
656 if set(mode).issuperset('r+') and self.seekable():
~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd)
1212 err = _snd.sf_error(file_ptr)
-> 1213 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
1214 if mode_int == _snd.SFM_WRITE:
LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format.
```
I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio` (at least according to #4889).
### Steps to reproduce the bug
```python
dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train")
dataset[0]
```
### Expected behavior
Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError`
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5488/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5487/comments | https://api.github.com/repos/huggingface/datasets/issues/5487/events | https://github.com/huggingface/datasets/issues/5487 | 1,564,480,121 | I_kwDODunzps5dQBJ5 | 5,487 | Incorrect filepath for dill module | {
"avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4",
"events_url": "https://api.github.com/users/avivbrokman/events{/privacy}",
"followers_url": "https://api.github.com/users/avivbrokman/followers",
"following_url": "https://api.github.com/users/avivbrokman/following{/other_user}",
"gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avivbrokman",
"id": 35349273,
"login": "avivbrokman",
"node_id": "MDQ6VXNlcjM1MzQ5Mjcz",
"organizations_url": "https://api.github.com/users/avivbrokman/orgs",
"received_events_url": "https://api.github.com/users/avivbrokman/received_events",
"repos_url": "https://api.github.com/users/avivbrokman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avivbrokman"
} | [] | open | false | 5 | 2023-01-31T15:01:08Z | 2023-02-02T07:07:55Z | NONE | ### Describe the bug
I installed the `datasets` package and when I try to `import` it, I get the following error:
```
Traceback (most recent call last):
File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module>
import datasets
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import Features, Image, Value
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/__init__.py", line 17, in <module>
from .audio import Audio
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/audio.py", line 12, in <module>
from ..download.streaming_download_manager import xopen
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/download_manager.py", line 36, in <module>
from ..utils.py_utils import NestedDataStructure, map_nested, size_str
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 602, in <module>
class Pickler(dill.Pickler):
File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 605, in Pickler
dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill'
```
Looking at the github source code for dill, it appears that `datasets` has a bug or is not compatible with the latest `dill`. Specifically, rather than `dill._dill.XXXX` it should be `dill.dill._dill.XXXX`. But given the popularity of `datasets` I feel confused about me being the first person to have this issue, so it makes me wonder if I'm misdiagnosing the issue.
### Steps to reproduce the bug
Install `dill` and `datasets` packages and then `import datasets`
### Expected behavior
I expect `datasets` to import.
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 11.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5487/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5487/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5486/comments | https://api.github.com/repos/huggingface/datasets/issues/5486/events | https://github.com/huggingface/datasets/issues/5486 | 1,564,059,749 | I_kwDODunzps5dOahl | 5,486 | Adding `sep` to TextConfig | {
"avatar_url": "https://avatars.githubusercontent.com/u/29576434?v=4",
"events_url": "https://api.github.com/users/omar-araboghli/events{/privacy}",
"followers_url": "https://api.github.com/users/omar-araboghli/followers",
"following_url": "https://api.github.com/users/omar-araboghli/following{/other_user}",
"gists_url": "https://api.github.com/users/omar-araboghli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omar-araboghli",
"id": 29576434,
"login": "omar-araboghli",
"node_id": "MDQ6VXNlcjI5NTc2NDM0",
"organizations_url": "https://api.github.com/users/omar-araboghli/orgs",
"received_events_url": "https://api.github.com/users/omar-araboghli/received_events",
"repos_url": "https://api.github.com/users/omar-araboghli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omar-araboghli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omar-araboghli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omar-araboghli"
} | [] | open | false | 2 | 2023-01-31T10:39:53Z | 2023-01-31T14:50:18Z | NONE | I have a local a `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` to parse a paragraph into an array for each column ? If so, I am happy to contribute!
## Environment
* `python 3.8.10`
* `datasets 2.9.0`
## Snippet of `train.txt`
```txt
Distribution NN O O
and NN O O
dynamics NN O O
of NN O O
electron NN O B-RP
complexes NN O I-RP
in NN O O
cyanobacterial NN O B-R
membranes NN O I-R
The NN O O
occurrence NN O O
of NN O O
prostaglandin NN O B-R
F2α NN O I-R
in NN O O
Pharbitis NN O B-R
seedlings NN O I-R
grown NN O O
under NN O O
short NN O B-P
days NN O I-P
or NN O I-P
days NN O I-P
```
## Current Behaviour
```python
# defining 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] here would fail with `ValueError: Length of names (4) does not match length of arrays (1)`
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='line')
dataset['train']['tokens'][0]
>>> 'Distribution\tNN\tO\tO'
```
## Expected Behaviour / Suggestion
```python
# suppose we defined 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='paragraph', sep='\t')
dataset['train']['tokens'][0]
>>> ['Distribution', 'and', 'dynamics', ... ]
dataset['train']['ner_tags'][0]
>>> ['O', 'O', 'O', ... ]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5486/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5485/comments | https://api.github.com/repos/huggingface/datasets/issues/5485/events | https://github.com/huggingface/datasets/pull/5485 | 1,563,002,829 | PR_kwDODunzps5I2ER2 | 5,485 | Add section in tutorial for IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | 2 | 2023-01-30T18:43:04Z | 2023-02-01T18:15:38Z | MEMBER | Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new doc introduced in:
- #5410 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5485/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5485",
"merged_at": "2023-02-01T18:08:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5485"
} | 2023-02-01T18:08:46Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5484/comments | https://api.github.com/repos/huggingface/datasets/issues/5484/events | https://github.com/huggingface/datasets/pull/5484 | 1,562,877,070 | PR_kwDODunzps5I1oaq | 5,484 | Update docs for `nyu_depth_v2` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/awsaf49",
"id": 36858976,
"login": "awsaf49",
"node_id": "MDQ6VXNlcjM2ODU4OTc2",
"organizations_url": "https://api.github.com/users/awsaf49/orgs",
"received_events_url": "https://api.github.com/users/awsaf49/received_events",
"repos_url": "https://api.github.com/users/awsaf49/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions",
"type": "User",
"url": "https://api.github.com/users/awsaf49"
} | [] | closed | false | 6 | 2023-01-30T17:37:08Z | 2023-02-05T14:22:10Z | CONTRIBUTOR | This PR will fix the issue mentioned in #5461.
cc: @sayakpaul @lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5484/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5484.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5484",
"merged_at": "2023-02-05T14:15:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5484.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5484"
} | 2023-02-05T14:15:04Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5483/comments | https://api.github.com/repos/huggingface/datasets/issues/5483/events | https://github.com/huggingface/datasets/issues/5483 | 1,560,894,690 | I_kwDODunzps5dCVzi | 5,483 | Unable to upload dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain"
} | [] | closed | false | 1 | 2023-01-28T15:18:26Z | 2023-01-29T08:09:49Z | NONE | ### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with python 3.10, pip installed datasets and:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.push_to_hub("ttt111")
/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_hf_folder.py:92: UserWarning: A token has been found in `/a/home/cc/students/cs/kirstain/.huggingface/token`. This is the old path where tokens were stored. The new location is `/home/olab/kirstain/.cache/huggingface/token` which is configurable using `HF_HOME` environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`.
warnings.warn(
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 279.94ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:02<?, ?it/s]
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:04<?, ?it/s]
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status
response.raise_for_status()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 334, in _inner_upload_lfs_object
return _upload_lfs_object(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 391, in _upload_lfs_object
lfs_upload(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 273, in lfs_upload
_upload_single_part(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 305, in _upload_single_part
hf_raise_for_status(upload_res)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 318, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4909, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4804, in _push_parquet_shards_to_hub
_retry(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 281, in _retry
return func(*func_args, **func_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2537, in upload_file
commit_info = self.create_commit(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2346, in create_commit
upload_lfs_files(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 346, in upload_lfs_files
thread_map(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 338, in _inner_upload_lfs_object
raise RuntimeError(
RuntimeError: Error while uploading 'data/train-00000-of-00001-6df93048e66df326.parquet' to the Hub.
```
### Expected behavior
The dataset should be uploaded without any exceptions
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.27
- Python version: 3.10.9
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5483/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-01-29T08:09:49Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5482/comments | https://api.github.com/repos/huggingface/datasets/issues/5482/events | https://github.com/huggingface/datasets/issues/5482 | 1,560,853,137 | I_kwDODunzps5dCLqR | 5,482 | Reload features from Parquet metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | 2 | 2023-01-28T13:12:31Z | 2023-02-05T18:09:54Z | MEMBER | The idea would be to allow this:
```python
ds.to_parquet("my_dataset/ds.parquet")
reloaded = load_dataset("my_dataset")
assert ds.features == reloaded.features
```
And it should also work with Image and Audio types (right now they're reloaded as a dict type)
This can be implemented by storing and reading the feature types in the parquet metadata, as we do for arrow files. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5482/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5481/comments | https://api.github.com/repos/huggingface/datasets/issues/5481/events | https://github.com/huggingface/datasets/issues/5481 | 1,560,468,195 | I_kwDODunzps5dAtrj | 5,481 | Load a cached dataset as iterable | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | 12 | 2023-01-27T21:43:51Z | 2023-02-07T15:58:15Z | MEMBER | The idea would be to allow something like
```python
ds = load_dataset("c4", "en", as_iterable=True)
```
To be used to train models. It would load an IterableDataset from the cached Arrow files.
Cc @stas00
Edit: from the discussions, we may load from the cache when `streaming=True`
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5481/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5481/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
}
] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5480/comments | https://api.github.com/repos/huggingface/datasets/issues/5480/events | https://github.com/huggingface/datasets/pull/5480 | 1,560,364,866 | PR_kwDODunzps5ItY2y | 5,480 | Select columns of Dataset or DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daskol",
"id": 9336514,
"login": "daskol",
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"repos_url": "https://api.github.com/users/daskol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daskol"
} | [] | open | false | 1 | 2023-01-27T20:06:16Z | 2023-02-08T19:12:22Z | NONE | Close #5474 and #5468. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5480/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5480",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5480"
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5479/comments | https://api.github.com/repos/huggingface/datasets/issues/5479/events | https://github.com/huggingface/datasets/issues/5479 | 1,560,357,590 | I_kwDODunzps5dASrW | 5,479 | audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | {
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcho19",
"id": 107211437,
"login": "jcho19",
"node_id": "U_kgDOBmPqrQ",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"repos_url": "https://api.github.com/users/jcho19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcho19"
} | [] | closed | false | 0 | 2023-01-27T20:01:22Z | 2023-01-29T05:23:14Z | NONE | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers; what could be missing or outdated in the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1.
```python
from datasets import load_dataset
ds = load_dataset("audiofolder", data_dir="...")
```
Here is the output (should be generating 400+ rows):
```
Downloading and preparing dataset audiofolder/default to ...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription'],
        num_rows: 1
    })
})
```
Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env):
Package Version
------------------- -------------------
aiofiles 22.1.0
aiohttp 3.8.3
aiosignal 1.3.1
altair 4.2.1
anyio 3.6.2
appdirs 1.4.4
argcomplete 2.0.0
argon2-cffi 20.1.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
audioread 3.0.0
backcall 0.2.0
bleach 4.0.0
certifi 2021.10.8
cffi 1.14.6
charset-normalizer 2.0.12
click 8.1.3
contourpy 1.0.7
cycler 0.11.0
datasets 2.9.0
debugpy 1.4.1
decorator 5.0.9
defusedxml 0.7.1
dill 0.3.6
distlib 0.3.4
entrypoints 0.3
evaluate 0.4.0
expecttest 0.1.3
fastapi 0.89.1
ffmpy 0.3.0
filelock 3.6.0
fonttools 4.38.0
frozenlist 1.3.3
fsspec 2023.1.0
future 0.18.2
gradio 3.16.2
h11 0.14.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.12.0
idna 3.3
ipykernel 6.2.0
ipython 7.26.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 3.0.1
jiwer 2.5.1
joblib 1.2.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kiwisolver 1.4.4
Levenshtein 0.20.2
librosa 0.9.2
linkify-it-py 1.0.3
llvmlite 0.39.1
markdown-it-py 2.1.0
MarkupSafe 2.0.1
matplotlib 3.6.3
matplotlib-inline 0.1.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mistune 0.8.4
multidict 6.0.4
multiprocess 0.70.14
nbclient 0.5.4
nbconvert 6.1.0
nbformat 5.1.3
nest-asyncio 1.5.1
notebook 6.4.3
numba 0.56.4
numpy 1.20.3
orjson 3.8.5
packaging 21.0
pandas 1.5.3
pandocfilters 1.4.3
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pipx 1.1.0
platformdirs 2.5.2
pooch 1.6.0
prometheus-client 0.11.0
prompt-toolkit 3.0.19
psutil 5.9.0
ptyprocess 0.7.0
pyarrow 10.0.1
pycparser 2.20
pycryptodome 3.16.0
pydantic 1.10.4
pydub 0.25.1
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
python-dateutil 2.8.2
python-multipart 0.0.5
pytz 2022.7.1
PyYAML 6.0
pyzmq 22.2.1
qtconsole 5.1.1
QtPy 1.10.0
rapidfuzz 2.13.7
regex 2022.10.31
requests 2.27.1
resampy 0.4.2
responses 0.18.0
rfc3986 1.5.0
scikit-learn 1.2.1
scipy 1.6.3
Send2Trash 1.8.0
setuptools 65.5.1
shiboken6 6.3.1
shiboken6-generator 6.3.1
six 1.16.0
sniffio 1.3.0
soundfile 0.11.0
starlette 0.22.0
terminado 0.11.0
testpath 0.5.0
threadpoolctl 3.1.0
tokenizers 0.13.2
toolz 0.12.0
torch 1.11.0a0+gitunknown
tornado 6.1
tqdm 4.64.1
traitlets 5.0.5
transformers 4.27.0.dev0
types-dataclasses 0.6.4
typing_extensions 4.1.1
uc-micro-py 1.0.1
urllib3 1.26.9
userpath 1.8.0
uvicorn 0.20.0
virtualenv 20.14.1
wcwidth 0.2.5
webencodings 0.5.1
websockets 10.4
wheel 0.37.1
widgetsnbextension 3.5.1
xxhash 3.2.0
yarl 1.8.2
### Steps to reproduce the bug
Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile are installed with the same versions listed above).
Create a custom audio dataset and load it in with load_dataset("audiofolder", ...)
### Expected behavior
load_dataset should create a dataset with 400+ rows.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.0
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5479/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-01-29T05:23:14Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5478/comments | https://api.github.com/repos/huggingface/datasets/issues/5478/events | https://github.com/huggingface/datasets/pull/5478 | 1,560,357,583 | PR_kwDODunzps5ItXQG | 5,478 | Tip for recomputing metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | closed | false | 2 | 2023-01-27T20:01:22Z | 2023-01-30T19:22:21Z | MEMBER | From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, thought I'd include a tip for recomputing the metadata numbers if it is your own dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5478/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5478.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5478",
"merged_at": "2023-01-30T19:15:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5478.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5478"
} | 2023-01-30T19:15:26Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5477/comments | https://api.github.com/repos/huggingface/datasets/issues/5477/events | https://github.com/huggingface/datasets/issues/5477 | 1,559,909,892 | I_kwDODunzps5c-lYE | 5,477 | Unpin sqlalchemy once issue is fixed | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | 0 | 2023-01-27T15:01:55Z | 2023-01-27T15:01:55Z | MEMBER | Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5477/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5476/comments | https://api.github.com/repos/huggingface/datasets/issues/5476/events | https://github.com/huggingface/datasets/pull/5476 | 1,559,594,684 | PR_kwDODunzps5IqwC_ | 5,476 | Pin sqlalchemy | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | 3 | 2023-01-27T11:26:38Z | 2023-01-27T12:06:51Z | MEMBER | since sqlalchemy update to 2.0.0 the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514
the error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5476/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5476/timeline | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5476.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5476",
"merged_at": "2023-01-27T11:57:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5476.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5476"
} | 2023-01-27T11:57:48Z | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5475/comments | https://api.github.com/repos/huggingface/datasets/issues/5475/events | https://github.com/huggingface/datasets/issues/5475 | 1,559,030,149 | I_kwDODunzps5c7OmF | 5,475 | Dataset scan time is much slower than using native arrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonny-cyberhaven",
"id": 121845112,
"login": "jonny-cyberhaven",
"node_id": "U_kgDOB0M1eA",
"organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs",
"received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events",
"repos_url": "https://api.github.com/users/jonny-cyberhaven/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonny-cyberhaven"
} | [] | closed | false | 3 | 2023-01-27T01:32:25Z | 2023-01-30T16:17:11Z | CONTRIBUTOR | ### Describe the bug
I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that explains this phenomenon?
### Steps to reproduce the bug
https://colab.research.google.com/drive/11EtHDaGAf1DKCpvYnAPJUW-LFfAcDzHY?usp=sharing
### Expected behavior
I expect scan times to be on par with using pyarrow directly.
### Environment info
standard colab environment | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5475/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | 2023-01-30T16:17:11Z | completed | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
https://api.github.com/repos/huggingface/datasets/issues/5474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5474/comments | https://api.github.com/repos/huggingface/datasets/issues/5474/events | https://github.com/huggingface/datasets/issues/5474 | 1,558,827,155 | I_kwDODunzps5c6dCT | 5,474 | Column project operation on `datasets.Dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daskol",
"id": 9336514,
"login": "daskol",
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"repos_url": "https://api.github.com/users/daskol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daskol"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | 1 | 2023-01-26T21:47:53Z | 2023-02-01T16:44:09Z | NONE | ### Feature request
There is no operation to select a subset of columns of the original dataset. The expected API follows.
```python
a = Dataset.from_dict({
'int': [0, 1, 2],
'char': ['a', 'b', 'c'],
'none': [None] * 3,
})
b = a.project('int', 'char') # usually, .select()
print(a.column_names) # stdout: ['int', 'char', 'none']
print(b.column_names) # stdout: ['int', 'char']
```
The `project` method can easily accept not only column names (as `str`) but also, for example, a univariate function applied to the corresponding column. Alternatively, keyword arguments can be used in order to rename columns in advance (see `pandas`, `pyspark`, `pyarrow`, and SQL).
### Motivation
Projection is a typical operation in every data processing library. And it is a basic block of a well-known data manipulation language like SQL. Without this operation `datasets.Dataset` interface is not complete.
### Your contribution
Not sure. Some of my PRs are still open and some do not have any discussions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5474/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5474/timeline | null | {
"diff_url": null,
"html_url": null,
"merged_at": null,
"patch_url": null,
"url": null
} | null | null | {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
} | [] | {
"closed_at": null,
"closed_issues": null,
"created_at": null,
"creator": {
"avatar_url": null,
"events_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"gravatar_id": null,
"html_url": null,
"id": null,
"login": null,
"node_id": null,
"organizations_url": null,
"received_events_url": null,
"repos_url": null,
"site_admin": null,
"starred_url": null,
"subscriptions_url": null,
"type": null,
"url": null
},
"description": null,
"due_on": null,
"html_url": null,
"id": null,
"labels_url": null,
"node_id": null,
"number": null,
"open_issues": null,
"state": null,
"title": null,
"updated_at": null,
"url": null
} | false |
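The column projection operation requested in issue 5474 above can be pinned down with a minimal plain-Python sketch of the expected semantics. This is illustrative only: the dict-of-lists stand-in and the `project` helper below are hypothetical, not part of the `datasets` API.

```python
def project(columns: dict, *names):
    """Return a new dict of columns containing only the named columns,
    mirroring the .project() operation proposed for datasets.Dataset."""
    # Fail loudly on unknown column names, as a real API would.
    missing = [n for n in names if n not in columns]
    if missing:
        raise KeyError(f"unknown columns: {missing}")
    # Keep the requested columns, in the requested order.
    return {n: columns[n] for n in names}


# The same example data as in the issue, as a dict of lists.
a = {"int": [0, 1, 2], "char": ["a", "b", "c"], "none": [None] * 3}
b = project(a, "int", "char")

print(list(a))  # ['int', 'char', 'none']
print(list(b))  # ['int', 'char']
```

The projected result shares the original column lists rather than copying them, which matches the zero-copy spirit of Arrow-backed tables.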