url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.11B) | node_id (stringlengths 18-32) | number (int64 1-3.59k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,643B) | updated_at (int64 1,587B-1,643B) | closed_at (int64 1,587B-1,643B ⌀) | author_association (stringclasses 3 values) | active_lock_reason (null) | draft (bool 2 classes) | pull_request (dict) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3591/comments | https://api.github.com/repos/huggingface/datasets/issues/3591/events | https://github.com/huggingface/datasets/pull/3591 | 1,106,928,613 | PR_kwDODunzps4xNDoB | 3,591 | Add support for time, date, duration, and decimal dtypes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,513,565,000 | 1,642,513,565,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3591",
"html_url": "https://github.com/huggingface/datasets/pull/3591",
"diff_url": "https://github.com/huggingface/datasets/pull/3591.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3591.patch",
"merged_at": null
} | Add support for the pyarrow time (maps to `datetime.time` in Python), date (maps to `datetime.date` in Python), duration (maps to `datetime.timedelta` in Python), and decimal (maps to `decimal.Decimal` in Python) dtypes. This should be helpful when writing scripts for time-series datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3591/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3590/comments | https://api.github.com/repos/huggingface/datasets/issues/3590/events | https://github.com/huggingface/datasets/pull/3590 | 1,106,784,860 | PR_kwDODunzps4xMlGg | 3,590 | Update README.md | {
"login": "borgr",
"id": 6416600,
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borgr",
"html_url": "https://github.com/borgr",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"repos_url": "https://api.github.com/users/borgr/repos",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,504,973,000 | 1,642,504,973,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3590",
"html_url": "https://github.com/huggingface/datasets/pull/3590",
"diff_url": "https://github.com/huggingface/datasets/pull/3590.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3590.patch",
"merged_at": null
} | Update the license and fix a few small details concerning ANLI | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3590/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3589/comments | https://api.github.com/repos/huggingface/datasets/issues/3589/events | https://github.com/huggingface/datasets/pull/3589 | 1,106,766,114 | PR_kwDODunzps4xMhGp | 3,589 | Pin torchmetrics to fix the COMET test | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,642,503,829,000 | 1,642,503,896,000 | 1,642,503,895,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3589",
"html_url": "https://github.com/huggingface/datasets/pull/3589",
"diff_url": "https://github.com/huggingface/datasets/pull/3589.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3589.patch",
"merged_at": 1642503895000
} | Torchmetrics 0.7.0 got released and has issues with `transformers` (see https://github.com/PyTorchLightning/metrics/issues/770)
I'm pinning it to 0.6.0 in the CI, since 0.7.0 makes the COMET metric test fail. COMET requires torchmetrics==0.6.0 anyway. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3589/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3588/comments | https://api.github.com/repos/huggingface/datasets/issues/3588/events | https://github.com/huggingface/datasets/pull/3588 | 1,106,749,000 | PR_kwDODunzps4xMdiC | 3,588 | Update README.md | {
"login": "borgr",
"id": 6416600,
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borgr",
"html_url": "https://github.com/borgr",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"repos_url": "https://api.github.com/users/borgr/repos",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,502,775,000 | 1,642,502,775,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3588",
"html_url": "https://github.com/huggingface/datasets/pull/3588",
"diff_url": "https://github.com/huggingface/datasets/pull/3588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3588.patch",
"merged_at": null
} | Adding information from the Git repo and the paper that was missing | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3588/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3587/comments | https://api.github.com/repos/huggingface/datasets/issues/3587/events | https://github.com/huggingface/datasets/issues/3587 | 1,106,719,182 | I_kwDODunzps5B9zHO | 3,587 | No module named 'fsspec.archive' | {
"login": "shuuchen",
"id": 13246825,
"node_id": "MDQ6VXNlcjEzMjQ2ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/13246825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuuchen",
"html_url": "https://github.com/shuuchen",
"followers_url": "https://api.github.com/users/shuuchen/followers",
"following_url": "https://api.github.com/users/shuuchen/following{/other_user}",
"gists_url": "https://api.github.com/users/shuuchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuuchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuuchen/subscriptions",
"organizations_url": "https://api.github.com/users/shuuchen/orgs",
"repos_url": "https://api.github.com/users/shuuchen/repos",
"events_url": "https://api.github.com/users/shuuchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuuchen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,642,501,021,000 | 1,642,501,990,000 | 1,642,501,990,000 | NONE | null | null | null | ## Describe the bug
Cannot import datasets after installation.
## Steps to reproduce the bug
```shell
$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 61, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 28, in <module>
from .features import (
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/__init__.py", line 2, in <module>
from .audio import Audio
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/audio.py", line 7, in <module>
from ..utils.streaming_download_manager import xopen
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 18, in <module>
from ..filesystems import COMPRESSION_FILESYSTEMS
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/__init__.py", line 6, in <module>
from . import compression
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/compression.py", line 5, in <module>
from fsspec.archive import AbstractArchiveFileSystem
ModuleNotFoundError: No module named 'fsspec.archive'
```
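A likely remedy (an assumption on my part, since the thread doesn't record the resolution): the installed `fsspec` probably predates the `fsspec.archive` module, so upgrading it should make the import succeed:
```shell
pip install --upgrade fsspec
```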
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3587/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3586/comments | https://api.github.com/repos/huggingface/datasets/issues/3586/events | https://github.com/huggingface/datasets/issues/3586 | 1,106,455,672 | I_kwDODunzps5B8yx4 | 3,586 | Revisit `enable/disable_` toggle function prefix | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,642,478,995,000 | 1,642,478,995,000 | null | CONTRIBUTOR | null | null | null | As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to
- De-deprecating `disable_progress_bar()`
- Adding `enable_progress_bar()`
- On the caching side, adding `enable_caching` and `disable_caching`
Additional decisions have to be made with regard to the existing `set_enabled_X` functions; that is, whether to keep them as is or deprecate them in favor of the functions above.
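For illustration, a minimal sketch of what the proposed toggle pair could look like (names taken from the bullets above; the module-level state variable and the `is_progress_bar_enabled` helper are my own assumptions, not an agreed design):
```python
_progress_bar_enabled: bool = True

def enable_progress_bar() -> None:
    """Turn progress bars back on globally."""
    global _progress_bar_enabled
    _progress_bar_enabled = True

def disable_progress_bar() -> None:
    """Turn progress bars off globally."""
    global _progress_bar_enabled
    _progress_bar_enabled = False

def is_progress_bar_enabled() -> bool:
    """Read the current toggle state."""
    return _progress_bar_enabled
```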
cc @mariosasko @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3586/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3585/comments | https://api.github.com/repos/huggingface/datasets/issues/3585/events | https://github.com/huggingface/datasets/issues/3585 | 1,105,821,470 | I_kwDODunzps5B6X8e | 3,585 | Datasets streaming + map doesn't work for `Audio` | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This seems related to https://github.com/huggingface/datasets/issues/3505."
] | 1,642,424,142,000 | 1,642,424,757,000 | null | MEMBER | null | null | null | ## Describe the bug
When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "en", streaming=True, split="train")
def map_fn(batch):
print("audio keys", batch["audio"].keys())
batch["audio"] = batch["audio"]["array"][:100]
return batch
ds = ds.map(map_fn)
sample = next(iter(ds))
```
I think the audio is somehow decoded before `.map(...)` is actually called.
## Expected results
IMO, the above code snippet should work.
## Actual results
```bash
audio keys dict_keys(['path', 'bytes'])
Traceback (most recent call last):
File "./run_audio.py", line 15, in <module>
sample = next(iter(ds))
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "./run_audio.py", line 9, in map_fn
batch["input"] = batch["audio"]["array"][:100]
KeyError: 'array'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.1.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3585/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3584/comments | https://api.github.com/repos/huggingface/datasets/issues/3584/events | https://github.com/huggingface/datasets/issues/3584 | 1,105,231,768 | I_kwDODunzps5B4H-Y | 3,584 | https://huggingface.co/datasets/huggingface/transformers-metadata | {
"login": "ecankirkic",
"id": 37082592,
"node_id": "MDQ6VXNlcjM3MDgyNTky",
"avatar_url": "https://avatars.githubusercontent.com/u/37082592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ecankirkic",
"html_url": "https://github.com/ecankirkic",
"followers_url": "https://api.github.com/users/ecankirkic/followers",
"following_url": "https://api.github.com/users/ecankirkic/following{/other_user}",
"gists_url": "https://api.github.com/users/ecankirkic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ecankirkic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ecankirkic/subscriptions",
"organizations_url": "https://api.github.com/users/ecankirkic/orgs",
"repos_url": "https://api.github.com/users/ecankirkic/repos",
"events_url": "https://api.github.com/users/ecankirkic/events{/privacy}",
"received_events_url": "https://api.github.com/users/ecankirkic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [] | 1,642,378,694,000 | 1,642,411,314,000 | null | NONE | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3584/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3583/comments | https://api.github.com/repos/huggingface/datasets/issues/3583/events | https://github.com/huggingface/datasets/issues/3583 | 1,105,195,144 | I_kwDODunzps5B3_CI | 3,583 | Add The Medical Segmentation Decathlon Dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,642,369,345,000 | 1,642,369,345,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** *The Medical Segmentation Decathlon Dataset*
- **Description:** The underlying dataset was designed to explore the axes of difficulty typically encountered when dealing with medical images, such as small datasets, unbalanced labels, multi-site data, and small objects.
- **Paper:** https://arxiv.org/abs/2106.05735
- **Data:** http://medicaldecathlon.com/
- **Motivation:** Hugging Face seeks to democratize ML for society. One of the growing niches within ML is the ML + Medicine community. Key datasets will help increase the supply of HF resources and seed an initial community.
(cc @osanseviero @abidlabs )
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3583/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3582/comments | https://api.github.com/repos/huggingface/datasets/issues/3582/events | https://github.com/huggingface/datasets/issues/3582 | 1,104,877,303 | I_kwDODunzps5B2xb3 | 3,582 | conll 2003 dataset source url is no longer valid | {
"login": "rcanand",
"id": 303900,
"node_id": "MDQ6VXNlcjMwMzkwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/303900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcanand",
"html_url": "https://github.com/rcanand",
"followers_url": "https://api.github.com/users/rcanand/followers",
"following_url": "https://api.github.com/users/rcanand/following{/other_user}",
"gists_url": "https://api.github.com/users/rcanand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcanand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcanand/subscriptions",
"organizations_url": "https://api.github.com/users/rcanand/orgs",
"repos_url": "https://api.github.com/users/rcanand/repos",
"events_url": "https://api.github.com/users/rcanand/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcanand/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"I came to open the same issue."
] | 1,642,287,857,000 | 1,642,425,282,000 | null | NONE | null | null | null | ## Describe the bug
Loading the `conll2003` dataset fails because the data was removed (just yesterday, 1/14/2022) from the location the loading script points to.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("conll2003")
```
## Expected results
The dataset should load.
## Actual results
It is looking for the dataset at `https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt` but it was removed from there yesterday (see [commit](https://github.com/davidsbatista/NER-datasets/commit/9d8f45cc7331569af8eb3422bbe1c97cbebd5690) that removed the file and related [issue](https://github.com/davidsbatista/NER-datasets/issues/8)).
- We should replace this with an alternate valid location.
- This is also referenced in the Hugging Face course, chapter 7 [Colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section2_pt.ipynb), which is likewise broken.
```python
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-27c956bec93c> in <module>()
1 from datasets import load_dataset
2
----> 3 raw_datasets = load_dataset("conll2003")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params)
610 )
611 elif response is not None and response.status_code == 404:
--> 612 raise FileNotFoundError(f"Couldn't find file at {url}")
613 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
614 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3582/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3582/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3581/comments | https://api.github.com/repos/huggingface/datasets/issues/3581/events | https://github.com/huggingface/datasets/issues/3581 | 1,104,857,822 | I_kwDODunzps5B2sre | 3,581 | Unable to create a dataset from a parquet file in S3 | {
"login": "regCode",
"id": 18012903,
"node_id": "MDQ6VXNlcjE4MDEyOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/18012903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regCode",
"html_url": "https://github.com/regCode",
"followers_url": "https://api.github.com/users/regCode/followers",
"following_url": "https://api.github.com/users/regCode/following{/other_user}",
"gists_url": "https://api.github.com/users/regCode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regCode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regCode/subscriptions",
"organizations_url": "https://api.github.com/users/regCode/orgs",
"repos_url": "https://api.github.com/users/regCode/repos",
"events_url": "https://api.github.com/users/regCode/events{/privacy}",
"received_events_url": "https://api.github.com/users/regCode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,642,282,456,000 | 1,642,282,456,000 | null | NONE | null | null | null | ## Describe the bug
Trying to create a dataset from a parquet file in S3.
## Steps to reproduce the bug
```python
import s3fs
from datasets import Dataset
s3 = s3fs.S3FileSystem(anon=False)
with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file:
dataset = Dataset.from_parquet(s3file)
```
## Expected results
A new Dataset object
## Actual results
```AttributeError: 'S3File' object has no attribute 'decode'```
```
AttributeError Traceback (most recent call last)
<command-2452877612515691> in <module>
5
6 with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file:
----> 7 dataset = Dataset.from_parquet(s3file)
/databricks/python/lib/python3.8/site-packages/datasets/arrow_dataset.py in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs)
907 from .io.parquet import ParquetDatasetReader
908
--> 909 return ParquetDatasetReader(
910 path_or_paths,
911 split=split,
/databricks/python/lib/python3.8/site-packages/datasets/io/parquet.py in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, **kwargs)
28 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}
29 hash = _PACKAGED_DATASETS_MODULES["parquet"][1]
---> 30 self.builder = Parquet(
31 cache_dir=cache_dir,
32 data_files=path_or_paths,
/databricks/python/lib/python3.8/site-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, base_path, info, features, use_auth_token, namespace, data_files, data_dir, **config_kwargs)
246
247 if data_files is not None and not isinstance(data_files, DataFilesDict):
--> 248 data_files = DataFilesDict.from_local_or_remote(
249 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
250 )
/databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
576 for key, patterns_for_key in patterns.items():
577 out[key] = (
--> 578 DataFilesList.from_local_or_remote(
579 patterns_for_key,
580 base_path=base_path,
/databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
544 ) -> "DataFilesList":
545 base_path = base_path if base_path is not None else str(Path().resolve())
--> 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)
548 return cls(data_files, origin_metadata)
/databricks/python/lib/python3.8/site-packages/datasets/data_files.py in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
191 data_files = []
192 for pattern in patterns:
--> 193 if is_remote_url(pattern):
194 data_files.append(Url(pattern))
195 else:
/databricks/python/lib/python3.8/site-packages/datasets/utils/file_utils.py in is_remote_url(url_or_filename)
115
116 def is_remote_url(url_or_filename: str) -> bool:
--> 117 parsed = urlparse(url_or_filename)
118 return parsed.scheme in ("http", "https", "s3", "gs", "hdfs", "ftp")
119
/usr/lib/python3.8/urllib/parse.py in urlparse(url, scheme, allow_fragments)
370 Note that we don't break the components up in smaller bits
371 (e.g. netloc is a single string) and we don't expand % escapes."""
--> 372 url, scheme, _coerce_result = _coerce_args(url, scheme)
373 splitresult = urlsplit(url, scheme, allow_fragments)
374 scheme, netloc, url, query, fragment = splitresult
/usr/lib/python3.8/urllib/parse.py in _coerce_args(*args)
122 if str_input:
123 return args + (_noop,)
--> 124 return _decode_args(args) + (_encode_result,)
125
126 # Result objects are more helpful than simple tuples
/usr/lib/python3.8/urllib/parse.py in _decode_args(args, encoding, errors)
106 def _decode_args(args, encoding=_implicit_encoding,
107 errors=_implicit_errors):
--> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args)
109
110 def _coerce_args(*args):
/usr/lib/python3.8/urllib/parse.py in <genexpr>(.0)
106 def _decode_args(args, encoding=_implicit_encoding,
107 errors=_implicit_errors):
--> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args)
109
110 def _coerce_args(*args):
AttributeError: 'S3File' object has no attribute 'decode'
```
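As a possible workaround (a sketch only, assuming the file is a plain Parquet file that pyarrow can read from a file-like object), the table can be read with `pyarrow.parquet` first and then wrapped into a `Dataset`:
```python
import pyarrow.parquet as pq
import s3fs
from datasets import Dataset

s3 = s3fs.S3FileSystem(anon=False)
with s3.open(PATH_LTR_TOY_CLEAN_DATASET, "rb") as s3file:
    table = pq.read_table(s3file)  # read the Parquet data into an Arrow table

dataset = Dataset.from_pandas(table.to_pandas())  # build the Dataset in memory
```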
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.8.10
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3581/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3580/comments | https://api.github.com/repos/huggingface/datasets/issues/3580/events | https://github.com/huggingface/datasets/issues/3580 | 1,104,663,242 | I_kwDODunzps5B19LK | 3,580 | Bug in wiki bio load | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [] | 1,642,241,073,000 | 1,642,425,303,000 | null | NONE | null | null | null |
wiki_bio is failing to load because of a broken Google Drive link. Can someone fix this?
![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png)
![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com/3104771/149617875-ef0e30b0-b76e-48cf-b3eb-93ba8e6e5465.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3580/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3579/comments | https://api.github.com/repos/huggingface/datasets/issues/3579/events | https://github.com/huggingface/datasets/pull/3579 | 1,103,451,118 | PR_kwDODunzps4xBmY4 | 3,579 | Add Text2log Dataset | {
"login": "apergo-ai",
"id": 68908804,
"node_id": "MDQ6VXNlcjY4OTA4ODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apergo-ai",
"html_url": "https://github.com/apergo-ai",
"followers_url": "https://api.github.com/users/apergo-ai/followers",
"following_url": "https://api.github.com/users/apergo-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions",
"organizations_url": "https://api.github.com/users/apergo-ai/orgs",
"repos_url": "https://api.github.com/users/apergo-ai/repos",
"events_url": "https://api.github.com/users/apergo-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/apergo-ai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,157,101,000 | 1,642,157,101,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3579",
"html_url": "https://github.com/huggingface/datasets/pull/3579",
"diff_url": "https://github.com/huggingface/datasets/pull/3579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3579.patch",
"merged_at": null
} | Adds the text2log dataset, used for training models that translate natural-language sentences into first-order logic (FOL) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3579/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3578/comments | https://api.github.com/repos/huggingface/datasets/issues/3578/events | https://github.com/huggingface/datasets/issues/3578 | 1,103,403,287 | I_kwDODunzps5BxJkX | 3,578 | label information get lost after parquet serialization | {
"login": "Tudyx",
"id": 56633664,
"node_id": "MDQ6VXNlcjU2NjMzNjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/56633664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tudyx",
"html_url": "https://github.com/Tudyx",
"followers_url": "https://api.github.com/users/Tudyx/followers",
"following_url": "https://api.github.com/users/Tudyx/following{/other_user}",
"gists_url": "https://api.github.com/users/Tudyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tudyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tudyx/subscriptions",
"organizations_url": "https://api.github.com/users/Tudyx/orgs",
"repos_url": "https://api.github.com/users/Tudyx/repos",
"events_url": "https://api.github.com/users/Tudyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tudyx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,642,155,038,000 | 1,642,155,038,000 | null | NONE | null | null | null | ## Describe the bug
In the *dataset_info.json* file, information about the label gets lost after dataset serialization.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# normal save
dataset = load_dataset('glue', 'sst2', split='train')
dataset.save_to_disk("normal_save")
# save after parquet serialization
dataset.to_parquet("glue-sst2-train.parquet")
dataset = load_dataset("parquet", data_files='glue-sst2-train.parquet')
dataset.save_to_disk("save_after_parquet")
```
## Expected results
I expected the label information in the *dataset_info.json* file to be kept even after Parquet serialization.
## Actual results
With normal serialization I got:
```json
"label": {
"num_classes": 2,
"names": [
"negative",
"positive"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
```
And after Parquet serialization I got:
```json
"label": {
"dtype": "int64",
"id": null,
"_type": "Value"
},
```
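A possible workaround (a sketch, assuming the original `Features` are still at hand): `load_dataset` accepts a `features` argument, so the `ClassLabel` definition can be passed back explicitly when reloading the Parquet file:
```python
from datasets import load_dataset

original = load_dataset("glue", "sst2", split="train")
original.to_parquet("glue-sst2-train.parquet")

# reload, restoring the ClassLabel information explicitly
reloaded = load_dataset(
    "parquet",
    data_files="glue-sst2-train.parquet",
    features=original.features,
)
```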
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: ubuntu 20.04
- Python version: 3.8.10
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3578/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3577/comments | https://api.github.com/repos/huggingface/datasets/issues/3577/events | https://github.com/huggingface/datasets/issues/3577 | 1,102,598,241 | I_kwDODunzps5BuFBh | 3,577 | Add The Mexican Emotional Speech Database (MESD) | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,642,117,776,000 | 1,642,117,776,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** *The Mexican Emotional Speech Database (MESD)*
- **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child.*
- **Paper:** *[Paper](https://ieeexplore.ieee.org/abstract/document/9629934/authors#authors)*
- **Data:** https://data.mendeley.com/datasets/cy34mh68j9/3
- **Motivation:** *Would add Spanish speech data to the HF datasets :)*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3577/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3576/comments | https://api.github.com/repos/huggingface/datasets/issues/3576/events | https://github.com/huggingface/datasets/pull/3576 | 1,102,059,651 | PR_kwDODunzps4w8sUm | 3,576 | Add PASS dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,094,167,000 | 1,642,094,167,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3576",
"html_url": "https://github.com/huggingface/datasets/pull/3576",
"diff_url": "https://github.com/huggingface/datasets/pull/3576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3576.patch",
"merged_at": null
} | This PR adds the PASS dataset.
Closes #3043 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3576/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3575/comments | https://api.github.com/repos/huggingface/datasets/issues/3575/events | https://github.com/huggingface/datasets/pull/3575 | 1,101,947,955 | PR_kwDODunzps4w8Usm | 3,575 | Add Arrow type casting to struct for Image and Audio + Support nested casting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Regarding the tests I'm just missing the FixedSizeListType type casting for ListArray objects, will to it tomorrow as well as adding new tests + docstrings\r\n\r\nand also adding soundfile in the CI",
"While writing some tests I noticed that the ExtensionArray can't be directly concatenated - maybe we can get rid of the extension types/arrays and only keep their storages in native arrow types.\r\n\r\nIn this case the `cast_storage` functions should be the responsibility of the Image and Audio classes directly. And therefore we would need two never cast to a pyarrow type again but to a HF feature - since they'd end up being the one able to tell what's castable or not. This is fine in my opinion but let me know what you think. I can take care of this on monday I think",
"Alright I got rid of all the extension type stuff, I'm writing the new tests now :)"
] | 1,642,088,219,000 | 1,642,505,921,000 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3575",
"html_url": "https://github.com/huggingface/datasets/pull/3575",
"diff_url": "https://github.com/huggingface/datasets/pull/3575.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3575.patch",
"merged_at": null
} | ## Intro
1. Currently, it's not possible to have nested features containing Audio or Image.
2. Moreover, one can keep an Arrow array as a StringArray to store paths to images, but such arrays can't be directly concatenated to another image array if it's stored in another Arrow type (typically, a StructType).
3. Allowing several Arrow types for a single HF feature type also leads to bugs like this one #3497
4. Issues like #3247 are quite frequent and happen when Arrow fails to reorder StructArrays.
5. Casting Audio feature type is blocking preparation for the ASR task template: https://github.com/huggingface/datasets/pull/3364
All those issues are linked together by the fact that:
- we are limited by Arrow type casting, which lacks features for nested types.
- and especially for Audio and Image: they are not robust enough for concatenation and feature inference.
## Proposed solution
To fix 1 and 4 I implemented nested array type casting (which is missing in PyArrow).
To fix 2, 3 and 5 while having a simple implementation for nested array type casting, I changed the storage type of Audio and Image to always be a StructType. Also casting from StringType is directly implemented via a new function `cast_storage` that is defined individually for Audio and Image. I also added nested decoding.
## Implementation details
### I. Better Arrow data type casting for nested data structures
I implemented new functions `array_cast` and `table_cast` that do the exact same as `pyarrow.Array.cast` or `pyarrow.Table.cast` but support nested struct casting and array re-ordering.
These functions can be used on PyArrow objects, and are already integrated in our own `datasets.table.Table.cast` functions. So one can do `my_dataset.data.cast(pyarrow_schema_with_custom_hf_types)` directly.
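For illustration, a minimal sketch of the struct re-ordering case that plain PyArrow can't handle (the `array_cast` import path is an assumption based on the description above):
```python
import pyarrow as pa

arr = pa.array([{"b": 1, "a": "x"}])  # inferred as struct<b: int64, a: string>
target = pa.struct({"a": pa.string(), "b": pa.int64()})  # same fields, re-ordered

# arr.cast(target)  # fails on pyarrow 6: struct field re-ordering isn't implemented

from datasets.table import array_cast
casted = array_cast(arr, target)  # re-orders the struct fields instead of failing
```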
### II. New image and audio extension types with custom casting
I used PyArrow extension types to be able to define what casting is allowed or not. For example both StringType->ImageExtensionType and StructType->ImageExtensionType are allowed, via the `cast_storage` method.
I factorized all the PyArrow + Pandas extension stuff in the `base_extension.py` file. This aims at separating the front-facing API code of `datasets` from the Arrow back-end which requires advanced knowledge.
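To give an idea of the mechanism, here is a toy sketch of such an extension type (names and the exact storage layout are simplified; see `base_extension.py` for the real thing):
```python
import pyarrow as pa

class ToyImageType(pa.ExtensionType):
    def __init__(self):
        storage = pa.struct({"bytes": pa.binary(), "path": pa.string()})
        pa.ExtensionType.__init__(self, storage, "datasets.toy_image")

    def __arrow_ext_serialize__(self):
        return b""

    @classmethod
    def __arrow_ext_deserialize__(cls, storage_type, serialized):
        return cls()

    def cast_storage(self, array: pa.Array) -> pa.StructArray:
        # Accept string arrays (paths) as well as struct arrays as storage.
        if pa.types.is_string(array.type):
            nulls = pa.array([None] * len(array), type=pa.binary())
            return pa.StructArray.from_arrays([nulls, array], names=["bytes", "path"])
        return array  # already struct storage
```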
### III. Nested feature decoding
I added a new function `decode_nested_example` to decode image and audio data in nested data structures. For optimization's sake, this function is only called if a column has at least one feature that requires decoding.
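Roughly, the decoding recursion looks like this (a simplified sketch, not the exact code):
```python
def decode_nested_example(feature, value):
    if isinstance(feature, dict):  # nested dict of sub-features
        return {k: decode_nested_example(f, value[k]) for k, f in feature.items()}
    if isinstance(feature, list):  # list of a single sub-feature
        return [decode_nested_example(feature[0], v) for v in value]
    if hasattr(feature, "decode_example"):  # Audio, Image, ...
        return feature.decode_example(value) if value is not None else None
    return value  # plain values pass through untouched
```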
## Alternative considered
The casting to struct type could have been done directly with python objects using some Audio and Image methods, but bringing arrow data to python objects is expensive. The Audio and Image types could also have been able to convert the arrow data directly, but this is not convenient to use when casting a full Arrow Table with nested fields. Therefore I decided to keep the Arrow data casting logic in Arrow extension types.
## Future work
This work can be used to allow the ArrayND feature types to be nested too (see issue #887)
## TODO
- [ ] fix current tests
- [ ] add new tests
- [ ] docstrings/comments | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3575/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3575/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3574/comments | https://api.github.com/repos/huggingface/datasets/issues/3574/events | https://github.com/huggingface/datasets/pull/3574 | 1,101,781,401 | PR_kwDODunzps4w7vu6 | 3,574 | Fix qa4mre tags | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,642,082,219,000 | 1,642,082,582,000 | 1,642,082,581,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3574",
"html_url": "https://github.com/huggingface/datasets/pull/3574",
"diff_url": "https://github.com/huggingface/datasets/pull/3574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3574.patch",
"merged_at": 1642082581000
} | The YAML tags were invalid. I also fixed the dataset mirroring logging that failed because of this issue [here](https://github.com/huggingface/datasets/actions/runs/1690109581) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3574/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3573/comments | https://api.github.com/repos/huggingface/datasets/issues/3573/events | https://github.com/huggingface/datasets/pull/3573 | 1,101,157,676 | PR_kwDODunzps4w5oE_ | 3,573 | Add Mauve metric | {
"login": "jthickstun",
"id": 2321244,
"node_id": "MDQ6VXNlcjIzMjEyNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2321244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jthickstun",
"html_url": "https://github.com/jthickstun",
"followers_url": "https://api.github.com/users/jthickstun/followers",
"following_url": "https://api.github.com/users/jthickstun/following{/other_user}",
"gists_url": "https://api.github.com/users/jthickstun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jthickstun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jthickstun/subscriptions",
"organizations_url": "https://api.github.com/users/jthickstun/orgs",
"repos_url": "https://api.github.com/users/jthickstun/repos",
"events_url": "https://api.github.com/users/jthickstun/events{/privacy}",
"received_events_url": "https://api.github.com/users/jthickstun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,045,968,000 | 1,642,181,538,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3573",
"html_url": "https://github.com/huggingface/datasets/pull/3573",
"diff_url": "https://github.com/huggingface/datasets/pull/3573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3573.patch",
"merged_at": null
} | Add support for the [Mauve](https://github.com/krishnap25/mauve) metric introduced in this [paper](https://arxiv.org/pdf/2102.01454.pdf) (Neurips, 2021). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3573/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3572/comments | https://api.github.com/repos/huggingface/datasets/issues/3572/events | https://github.com/huggingface/datasets/issues/3572 | 1,100,634,244 | I_kwDODunzps5BmliE | 3,572 | ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403) | {
"login": "sahoodib",
"id": 79107194,
"node_id": "MDQ6VXNlcjc5MTA3MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/79107194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahoodib",
"html_url": "https://github.com/sahoodib",
"followers_url": "https://api.github.com/users/sahoodib/followers",
"following_url": "https://api.github.com/users/sahoodib/following{/other_user}",
"gists_url": "https://api.github.com/users/sahoodib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sahoodib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sahoodib/subscriptions",
"organizations_url": "https://api.github.com/users/sahoodib/orgs",
"repos_url": "https://api.github.com/users/sahoodib/repos",
"events_url": "https://api.github.com/users/sahoodib/events{/privacy}",
"received_events_url": "https://api.github.com/users/sahoodib/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [] | 1,642,010,376,000 | 1,642,425,328,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** *IndicGLUE*
- **Description:** *natural language understanding benchmark for Indian languages*
- **Paper:** *(https://indicnlp.ai4bharat.org/home/)*
- **Data:** *https://huggingface.co/datasets/indic_glue#data-fields*
- **Motivation:** *I am trying to train my model on Indian languages*
While I am trying to load the dataset, it gives me the above error. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3572/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3571/comments | https://api.github.com/repos/huggingface/datasets/issues/3571/events | https://github.com/huggingface/datasets/pull/3571 | 1,100,519,604 | PR_kwDODunzps4w3fVQ | 3,571 | Add missing tasks to MuchoCine dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,003,652,000 | 1,642,003,652,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3571",
"html_url": "https://github.com/huggingface/datasets/pull/3571",
"diff_url": "https://github.com/huggingface/datasets/pull/3571.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3571.patch",
"merged_at": null
} | Addresses the 2nd bullet point in #2520.
I'm also removing the licensing information, because I couldn't verify that it is correct. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3571/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3570/comments | https://api.github.com/repos/huggingface/datasets/issues/3570/events | https://github.com/huggingface/datasets/pull/3570 | 1,100,480,791 | PR_kwDODunzps4w3Xez | 3,570 | Add the KMWP dataset (extension of #3564) | {
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,642,001,588,000 | 1,642,174,881,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3570",
"html_url": "https://github.com/huggingface/datasets/pull/3570",
"diff_url": "https://github.com/huggingface/datasets/pull/3570.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3570.patch",
"merged_at": null
} | Follow-up pull request to #3564 (adds the KMWP dataset) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3570/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3569/comments | https://api.github.com/repos/huggingface/datasets/issues/3569/events | https://github.com/huggingface/datasets/pull/3569 | 1,100,478,994 | PR_kwDODunzps4w3XGo | 3,569 | Add the DKTC dataset (Extension of #3564) | {
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I reflect your comment! @lhoestq ",
"Wait, the format of the data just changed, so I'll take it into consideration and commit it.",
"I update the code according to the dataset structure change.",
"Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).",
"> Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).\r\n\r\nHi! @lhoestq There is a problem. \r\n<img src=\"https://user-images.githubusercontent.com/42150335/149804142-3800e635-f5a0-44d9-9694-0c2b0c05f16b.png\" width=500>\r\n \r\nAs shown in the picture above, the conversation is divided into \"\\n\" in the \"conversion\" column. \r\nThat's why there's an error in the file path that only saved only five lines like below. \r\n\r\n```\r\n'idx', 'class', 'conversation'\r\n'0', '협박 대화', '\"지금 너 스스로를 죽여달라고 애원하는 것인가?'\r\n아닙니다. 죄송합니다.'\r\n죽을 거면 혼자 죽지 우리까지 사건에 휘말리게 해? 진짜 죽여버리고 싶게.'\r\n정말 잘못했습니다.\r\n```\r\n \r\nIn fact, these five lines are all one line. \r\n \r\n\r\n"
] | 1,642,001,489,000 | 1,642,436,184,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3569",
"html_url": "https://github.com/huggingface/datasets/pull/3569",
"diff_url": "https://github.com/huggingface/datasets/pull/3569.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3569.patch",
"merged_at": null
} | Follow-up pull request to #3564 (for DKTC).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3569/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3568/comments | https://api.github.com/repos/huggingface/datasets/issues/3568/events | https://github.com/huggingface/datasets/issues/3568 | 1,100,380,631 | I_kwDODunzps5BlnnX | 3,568 | Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError | {
"login": "fabianslife",
"id": 49265757,
"node_id": "MDQ6VXNlcjQ5MjY1NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/49265757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabianslife",
"html_url": "https://github.com/fabianslife",
"followers_url": "https://api.github.com/users/fabianslife/followers",
"following_url": "https://api.github.com/users/fabianslife/following{/other_user}",
"gists_url": "https://api.github.com/users/fabianslife/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabianslife/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabianslife/subscriptions",
"organizations_url": "https://api.github.com/users/fabianslife/orgs",
"repos_url": "https://api.github.com/users/fabianslife/repos",
"events_url": "https://api.github.com/users/fabianslife/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabianslife/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [] | 1,642,010,376,000 | 1,642,425,341,000 | null | NONE | null | null | null | I wanted to download the Medical Dialog Dataset from Hugging Face, using this GitHub link:
https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog
After downloading the raw datasets from Google Drive, I unpacked everything and put it in the same folder as medical_dialog.py, which is:
```
import copy
import os
import re
import datasets
_CITATION = """\
@article{chen2020meddiag,
title={MedDialog: a large-scale medical dialogue dataset},
author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
journal={arXiv preprint arXiv:2004.03329},
year={2020}
}
"""
_DESCRIPTION = """\
The MedDialog dataset (English) contains conversations (in English) between doctors and patients.\
It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. \
The raw dialogues are from healthcaremagic.com and icliniq.com.\
All copyrights of the data belong to healthcaremagic.com and icliniq.com.
"""
_HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System"
_LICENSE = ""
class MedicalDialog(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
BUILDER_CONFIGS = [
datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION),
datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION),
]
@property
def manual_download_instructions(self):
return """\
\n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\
and manually download the dataset from Google Drive. Once it is completed,
a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder(
or whichever folder your browser chooses to save files to). Unzip the folder to obtain
a folder named "Medical-Dialogue-Dataset-English" several text files.
Now, you can specify the path to this folder for the data_dir argument in the
datasets.load_dataset(...) option.
The <path/to/folder> can e.g. be "/Downloads/Medical-Dialogue-Dataset-English".
The data can then be loaded using the below command:\
datasets.load_dataset("medical_dialog", name="en", data_dir="/Downloads/Medical-Dialogue-Dataset-English")`.
\n For Chinese:\nFollow the above process. Change the 'name' to 'zh'.The download link is https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2
**NOTE**
- A caution while downloading from Drive: it is better to download single files, since creating a zip might not include files <500 MB. This has been observed multiple times.
- After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input to the data_dir argument.
"""
datasets.load_dataset("medical_dialog", name="en", data_dir="Medical-Dialogue-Dataset-English")
def _info(self):
if self.config.name == "zh":
features = datasets.Features(
{
"file_name": datasets.Value("string"),
"dialogue_id": datasets.Value("int32"),
"dialogue_url": datasets.Value("string"),
"dialogue_turns": datasets.Sequence(
{
"speaker": datasets.ClassLabel(names=["病人", "医生"]),
"utterance": datasets.Value("string"),
}
),
}
)
if self.config.name == "en":
features = datasets.Features(
{
"file_name": datasets.Value("string"),
"dialogue_id": datasets.Value("int32"),
"dialogue_url": datasets.Value("string"),
"dialogue_turns": datasets.Sequence(
{
"speaker": datasets.ClassLabel(names=["Patient", "Doctor"]),
"utterance": datasets.Value("string"),
}
),
}
)
return datasets.DatasetInfo(
# This is the description that will appear on the datasets page.
description=_DESCRIPTION,
features=features,
supervised_keys=None,
# Homepage of the dataset for documentation
homepage=_HOMEPAGE,
# License for the dataset if available
license=_LICENSE,
# Citation for the dataset
citation=_CITATION,
)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
if not os.path.exists(path_to_manual_file):
raise FileNotFoundError(
f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})"
)
filepaths = [
os.path.join(path_to_manual_file, txt_file_name)
for txt_file_name in sorted(os.listdir(path_to_manual_file))
if txt_file_name.endswith("txt")
]
return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})]
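# Note: `load_dataset` verifies the split generated here against the recorded
# metadata (num_bytes/num_examples in dataset_infos.json); a mismatch raises
# the NonMatchingSplitsSizesError shown below. Passing ignore_verifications=True
# to `load_dataset` is a possible way to skip that check.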
def _generate_examples(self, filepaths):
"""Yields examples. Iterates over each file and give the creates the corresponding features.
NOTE:
- The code makes some assumption on the structure of the raw .txt file.
- There are some checks to separate different ids. Hopefully, this should not cause further issues later when more txt files are added.
"""
data_lang = self.config.name
id_ = -1
for filepath in filepaths:
with open(filepath, encoding="utf-8") as f_in:
# Parameters to just "sectionize" the raw data
last_part = ""
last_dialog = {}
last_list = []
last_user = ""
check_list = []
# These flags are present to have a single function address both chinese and english data
# English data is a little haphazard (i.e. the sentences span multiple lines),
# Chinese is compact with one line for doctor and patient.
conv_flag = False
des_flag = False
while True:
line = f_in.readline()
if not line:
break
# Extracting the dialog id
if line[:2] == "id": # Hardcode alert!
# Handling ID references that may come in the description
# These were observed in the Chinese dataset and were not
# followed by numbers
try:
dialogue_id = int(re.findall(r"\d+", line)[0])
except IndexError:
continue
# Extracting the url
if line[:4] == "http": # Hardcode alert!
dialogue_url = line.rstrip()
# Extracting the patient info from description.
if line[:11] == "Description": # Hardcode alert!
last_part = "description"
last_dialog = {}
last_list = []
last_user = ""
last_conv = {"speaker": "", "utterance": ""}
while True:
line = f_in.readline()
if (not line) or (line in ["\n", "\n\r"]):
break
else:
if data_lang == "zh": # Condition in chinese
if line[:5] == "病情描述:": # Hardcode alert!
last_user = "病人"
sen = f_in.readline().rstrip()
des_flag = True
if data_lang == "en":
last_user = "Patient"
sen = line.rstrip()
des_flag = True
if des_flag:
if sen == "":
continue
if sen in check_list:
last_conv["speaker"] = ""
last_conv["utterance"] = ""
else:
last_conv["speaker"] = last_user
last_conv["utterance"] = sen
check_list.append(sen)
des_flag = False
break
# Extracting the conversation info from dialogue.
elif line[:8] == "Dialogue": # Hardcode alert!
if last_part == "description" and len(last_conv["utterance"]) > 0:
last_part = "dialogue"
if data_lang == "zh":
last_user = "病人"
if data_lang == "en":
last_user = "Patient"
while True:
line = f_in.readline()
if (not line) or (line in ["\n", "\n\r"]):
conv_flag = False
last_user = ""
last_list.append(copy.deepcopy(last_conv))
# To ensure close of conversation, only even number of sentences
# are extracted
last_turn = len(last_list)
if int(last_turn / 2) > 0:
temp = int(last_turn / 2)
id_ += 1
last_dialog["file_name"] = filepath
last_dialog["dialogue_id"] = dialogue_id
last_dialog["dialogue_url"] = dialogue_url
last_dialog["dialogue_turns"] = last_list[: temp * 2]
yield id_, last_dialog
break
if data_lang == "zh":
if line[:3] == "病人:" or line[:3] == "医生:": # Hardcode alert!
user = line[:2] # Hardcode alert!
line = f_in.readline()
conv_flag = True
# The elif block is to ensure that multi-line sentences are captured.
# This has been observed only in english.
if data_lang == "en":
if line.strip() == "Patient:" or line.strip() == "Doctor:": # Hardcode alert!
user = line.replace(":", "").rstrip()
line = f_in.readline()
conv_flag = True
elif line[:2] != "id": # Hardcode alert!
conv_flag = True
# Continues till the next ID is parsed
if conv_flag:
sen = line.rstrip()
if sen == "":
continue
if user == last_user:
last_conv["utterance"] = last_conv["utterance"] + sen
else:
last_user = user
last_list.append(copy.deepcopy(last_conv))
last_conv["utterance"] = sen
last_conv["speaker"] = user
```
Running this code gives me the error:
```
File "C:\Users\Fabia\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\utils\info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=292801173, num_examples=229674, dataset_name='medical_dialog')}]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3568/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3567/comments | https://api.github.com/repos/huggingface/datasets/issues/3567/events | https://github.com/huggingface/datasets/pull/3567 | 1,100,296,696 | PR_kwDODunzps4w2xDl | 3,567 | Fix push to hub to allow individual split push | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,641,991,378,000 | 1,641,994,141,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3567",
"html_url": "https://github.com/huggingface/datasets/pull/3567",
"diff_url": "https://github.com/huggingface/datasets/pull/3567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3567.patch",
"merged_at": null
} | # Description of the issue
If one pushes a single split to a dataset repo, the upload overrides the existing config, and the splits that were pushed previously end up being lost, even though their data files are still present on the repo.
The new flow is the following:
- query the old config from the repo
- update into a new config (add/overwrite new split for example)
- push the new config (a sketch of this merge step follows below)
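A hypothetical sketch of the merge step (function and field names are made up for illustration; the real code works on the repo's JSON config):
```python
import json

def merge_split_into_config(old_config: dict, new_split: dict) -> dict:
    """Add or overwrite a single split without dropping the other ones."""
    config = dict(old_config)
    splits = {s["name"]: s for s in config.get("splits", [])}
    splits[new_split["name"]] = new_split  # only this split is overwritten
    config["splits"] = list(splits.values())
    return config

old_config = {"splits": [{"name": "train", "num_examples": 100}]}
new_config = merge_split_into_config(old_config, {"name": "test", "num_examples": 10})
print(json.dumps(new_config, indent=2))  # indent=2, as in the side fix below
```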
# Side fix
- `repo_id` in HfFileSystem was wrongly typed.
- I've added `indent=2`, which makes the dumped JSON much easier to read.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3567/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3566/comments | https://api.github.com/repos/huggingface/datasets/issues/3566/events | https://github.com/huggingface/datasets/pull/3566 | 1,100,155,902 | PR_kwDODunzps4w2Tcc | 3,566 | Add initial electricity time series dataset | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,641,982,892,000 | 1,642,501,053,000 | null | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3566",
"html_url": "https://github.com/huggingface/datasets/pull/3566",
"diff_url": "https://github.com/huggingface/datasets/pull/3566.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3566.patch",
"merged_at": null
} | Here is an initial prototype time series dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3566/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3565/comments | https://api.github.com/repos/huggingface/datasets/issues/3565/events | https://github.com/huggingface/datasets/pull/3565 | 1,099,296,693 | PR_kwDODunzps4wzjhH | 3,565 | Add parameter `preserve_index` to `from_pandas` | {
"login": "Sorrow321",
"id": 20703486,
"node_id": "MDQ6VXNlcjIwNzAzNDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sorrow321",
"html_url": "https://github.com/Sorrow321",
"followers_url": "https://api.github.com/users/Sorrow321/followers",
"following_url": "https://api.github.com/users/Sorrow321/following{/other_user}",
"gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions",
"organizations_url": "https://api.github.com/users/Sorrow321/orgs",
"repos_url": "https://api.github.com/users/Sorrow321/repos",
"events_url": "https://api.github.com/users/Sorrow321/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sorrow321/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> \r\n\r\nI did `make style` and it affected over 500 files\r\n\r\n```\r\nAll done! ✨ 🍰 ✨\r\n575 files reformatted, 372 files left unchanged.\r\nisort tests src benchmarks datasets/**/*.py metri\r\n```\r\n\r\n(result)\r\n![image](https://user-images.githubusercontent.com/20703486/149166681-2f9d1bc4-116a-4f53-ad42-e54e3b8bd605.png)\r\n",
"Nvm I was using wrong black version"
] | 1,641,914,797,000 | 1,642,003,887,000 | 1,642,003,887,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3565",
"html_url": "https://github.com/huggingface/datasets/pull/3565",
"diff_url": "https://github.com/huggingface/datasets/pull/3565.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3565.patch",
"merged_at": 1642003886000
} | Added optional parameter, so that user can get rid of useless index preserving. [Issue](https://github.com/huggingface/datasets/issues/3563) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3565/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3564/comments | https://api.github.com/repos/huggingface/datasets/issues/3564/events | https://github.com/huggingface/datasets/pull/3564 | 1,099,214,403 | PR_kwDODunzps4wzSOL | 3,564 | Add the KMWP & DKTC dataset. | {
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I reflect your review. cc. @lhoestq ",
"Ah sorry, I missed KMWP comment, wait.",
"I request 2 new pull requests. #3569 #3570"
] | 1,641,910,448,000 | 1,642,001,629,000 | 1,642,001,608,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3564",
"html_url": "https://github.com/huggingface/datasets/pull/3564",
"diff_url": "https://github.com/huggingface/datasets/pull/3564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3564.patch",
"merged_at": null
} | Add the DKTC dataset.
- https://github.com/tunib-ai/DKTC | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3564/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3563/comments | https://api.github.com/repos/huggingface/datasets/issues/3563/events | https://github.com/huggingface/datasets/issues/3563 | 1,099,070,368 | I_kwDODunzps5Bgnug | 3,563 | Dataset.from_pandas preserves useless index | {
"login": "Sorrow321",
"id": 20703486,
"node_id": "MDQ6VXNlcjIwNzAzNDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sorrow321",
"html_url": "https://github.com/Sorrow321",
"followers_url": "https://api.github.com/users/Sorrow321/followers",
"following_url": "https://api.github.com/users/Sorrow321/following{/other_user}",
"gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions",
"organizations_url": "https://api.github.com/users/Sorrow321/orgs",
"repos_url": "https://api.github.com/users/Sorrow321/repos",
"events_url": "https://api.github.com/users/Sorrow321/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sorrow321/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change. "
] | 1,641,902,827,000 | 1,642,003,887,000 | 1,642,003,887,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Let's say that you want to create a Dataset object from a pandas DataFrame. Most likely you will write something like this:
```
import pandas as pd
from datasets import Dataset
df = pd.read_csv('some_dataset.csv')
# Some DataFrame preprocessing code...
dataset = Dataset.from_pandas(df)
```
If your preprocessing code contains indexing operations like this:
```
df = df[df.col1 == some_value]
```
then your df.index can be changed from the default ```RangeIndex(start=0, stop=16590, step=1)``` to something like this: ```Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ..., 83979, 83980, 83981, 83982, 83983, 83984, 83985, 83986, 83987, 83988], dtype='int64', length=16590)```
In this case, PyArrow (by default) will preserve this non-standard index. As a result, your dataset object will have an extra field that you likely don't want: '__index_level_0__'.
You can easily fix this by adding the extra argument ```preserve_index=False``` to the call of ```InMemoryTable.from_pandas``` in ```arrow_dataset.py```.
If you agree that this isn't desirable behavior, I can make a PR fixing that.
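For reference, this is what the proposed fix would look like from the user side (a sketch assuming the argument is exposed on `Dataset.from_pandas`):
```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"col1": [1, 2, 1], "col2": ["a", "b", "c"]})
df = df[df.col1 == 1]  # filtering leaves a non-default Int64Index behind

# Proposed: drop the pandas index instead of keeping it as a column
dataset = Dataset.from_pandas(df, preserve_index=False)
assert "__index_level_0__" not in dataset.column_names
```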
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.11.0-44-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3563/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3562/comments | https://api.github.com/repos/huggingface/datasets/issues/3562/events | https://github.com/huggingface/datasets/pull/3562 | 1,098,341,351 | PR_kwDODunzps4wwa44 | 3,562 | Allow multiple task templates of the same type | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,846,727,000 | 1,641,910,607,000 | 1,641,910,607,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3562",
"html_url": "https://github.com/huggingface/datasets/pull/3562",
"diff_url": "https://github.com/huggingface/datasets/pull/3562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3562.patch",
"merged_at": 1641910606000
} | Add support for multiple task templates of the same type. Fixes (partially) #2520.
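For illustration, this allows dataset infos like the following (a sketch; the exact way a template is selected when preparing for a task may differ):
```python
from datasets import DatasetInfo
from datasets.tasks import TextClassification

# Two templates of the same type can now coexist in one dataset's info,
# e.g. classification over two different label columns.
info = DatasetInfo(
    task_templates=[
        TextClassification(text_column="text", label_column="sentiment"),
        TextClassification(text_column="text", label_column="topic"),
    ]
)
```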
CC: @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3562/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3561/comments | https://api.github.com/repos/huggingface/datasets/issues/3561/events | https://github.com/huggingface/datasets/issues/3561 | 1,098,328,870 | I_kwDODunzps5Bdysm | 3,561 | Cannot load ‘bookcorpusopen’ | {
"login": "HUIYINXUE",
"id": 54684403,
"node_id": "MDQ6VXNlcjU0Njg0NDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/54684403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HUIYINXUE",
"html_url": "https://github.com/HUIYINXUE",
"followers_url": "https://api.github.com/users/HUIYINXUE/followers",
"following_url": "https://api.github.com/users/HUIYINXUE/following{/other_user}",
"gists_url": "https://api.github.com/users/HUIYINXUE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HUIYINXUE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HUIYINXUE/subscriptions",
"organizations_url": "https://api.github.com/users/HUIYINXUE/orgs",
"repos_url": "https://api.github.com/users/HUIYINXUE/repos",
"events_url": "https://api.github.com/users/HUIYINXUE/events{/privacy}",
"received_events_url": "https://api.github.com/users/HUIYINXUE/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/))\r\n\r\nFinding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset some time ago.\r\n\r\nThere are community-created versions of BookCorpus, such as the files hosted in the link below.\r\nhttps://battle.shawwn.com/sdb/bookcorpus/\r\n\r\nAnd more discussion here:\r\nhttps://github.com/soskek/bookcorpus\r\n\r\nDo we want to remove this dataset entirely? There's a fair argument for this, given that the official BookCorpus dataset was taken down by the authors. If not, perhaps can open a PR with the link to the community-created tar above and updated dataset description."
] | 1,641,845,838,000 | 1,642,425,361,000 | null | NONE | null | null | null | ## Describe the bug
Cannot load 'bookcorpusopen'
## Steps to reproduce the bug
```python
dataset = load_dataset('bookcorpusopen')
```
or
```python
dataset = load_dataset('bookcorpusopen',script_version='master')
```
## Actual results
ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux version 3.10.0-1160.45.1.el7.x86_64
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3561/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3560/comments | https://api.github.com/repos/huggingface/datasets/issues/3560/events | https://github.com/huggingface/datasets/pull/3560 | 1,098,280,652 | PR_kwDODunzps4wwOMf | 3,560 | Run pyupgrade for Python 3.6+ | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Thanks for the change :)\r\nCould it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.",
"> Hi ! Thanks for the change :)\r\n> Could it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.\r\n\r\nI reverted the changes in `datasets/` instead of changing only `src/`. Does it sound good?"
] | 1,641,842,453,000 | 1,642,000,307,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3560",
"html_url": "https://github.com/huggingface/datasets/pull/3560",
"diff_url": "https://github.com/huggingface/datasets/pull/3560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3560.patch",
"merged_at": null
} | Run the command:
```bash
pyupgrade $(find . -name "*.py" -type f) --py36-plus
```
This mainly avoids unnecessary list creations and also removes code that is unnecessary for Python 3.6+.
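For context, a few typical rewrites `pyupgrade --py36-plus` performs (illustrative examples, not taken from this diff):
```python
items = ["a", "b", "a"]
name = "datasets"

old_set = set([x for x in items])   # rewritten to: {x for x in items}
old_fmt = "hello {}".format(name)   # rewritten to: f"hello {name}"

class Old(object):                  # rewritten to: class Old:
    pass
```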
It was originally part of #3489.
Tip for reviewing faster: use the CLI (`git diff`) and scroll. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3560/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3559/comments | https://api.github.com/repos/huggingface/datasets/issues/3559/events | https://github.com/huggingface/datasets/pull/3559 | 1,098,178,222 | PR_kwDODunzps4wv420 | 3,559 | Fix `DuplicatedKeysError` and improve card in `tweet_qa` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,835,660,000 | 1,642,000,438,000 | 1,642,000,437,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3559",
"html_url": "https://github.com/huggingface/datasets/pull/3559",
"diff_url": "https://github.com/huggingface/datasets/pull/3559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3559.patch",
"merged_at": 1642000436000
} | Fix #3555 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3559/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3558/comments | https://api.github.com/repos/huggingface/datasets/issues/3558/events | https://github.com/huggingface/datasets/issues/3558 | 1,098,025,866 | I_kwDODunzps5BcouK | 3,558 | Integrate Milvus (pymilvus) library | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,641,828,029,000 | 1,641,828,029,000 | null | CONTRIBUTOR | null | null | null | Milvus is a popular open-source vector database. We should add a new vector index to support this project. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3558/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3557/comments | https://api.github.com/repos/huggingface/datasets/issues/3557/events | https://github.com/huggingface/datasets/pull/3557 | 1,097,946,034 | PR_kwDODunzps4wvIHl | 3,557 | Fix bug in `ImageClassification` task template | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI failures are unrelated to the changes in this PR.",
"> The CI failures are unrelated to the changes in this PR.\r\n\r\nIt seems that some of the failures are due to the tests on the dataset cards (e.g. CIFAR, MNIST, FASHION_MNIST). Perhaps it's worth addressing those in this PR to avoid confusing downstream developers who branch off `master` and suddenly have a failing CI?",
"@lewtun We only run these tests against the modified datasets on the PR branch, so this will not lead to errors after merging."
] | 1,641,823,799,000 | 1,641,916,072,000 | 1,641,916,072,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3557",
"html_url": "https://github.com/huggingface/datasets/pull/3557",
"diff_url": "https://github.com/huggingface/datasets/pull/3557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3557.patch",
"merged_at": 1641916072000
} | Fixes a bug in the `ImageClassification` task template which requires specifying class labels twice in dataset scripts. Additionally, this PR refactors the API around the classification task templates for cleaner `labels` handling.
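For illustration, intended usage after the fix might look like this (a sketch; column names are placeholders):
```python
from datasets import ClassLabel, Features, Image
from datasets.tasks import ImageClassification

features = Features({"image": Image(), "label": ClassLabel(names=["cat", "dog"])})
# The class names now come from the `label` feature itself,
# so they no longer have to be passed to the template a second time.
task = ImageClassification(image_column="image", label_column="label")
```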
CC: @lewtun @nateraw | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3557/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3556/comments | https://api.github.com/repos/huggingface/datasets/issues/3556/events | https://github.com/huggingface/datasets/pull/3556 | 1,097,907,724 | PR_kwDODunzps4wvALx | 3,556 | Preserve encoding/decoding with features in `Iterable.map` call | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,641,821,540,000 | 1,642,504,325,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3556",
"html_url": "https://github.com/huggingface/datasets/pull/3556",
"diff_url": "https://github.com/huggingface/datasets/pull/3556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3556.patch",
"merged_at": null
} | As described in https://github.com/huggingface/datasets/issues/3505#issuecomment-1004755657, this PR uses a generator expression to encode/decode examples with `features` (which are set to None in `map`) before applying a map transform.
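A minimal sketch of the idea (my own illustration, written as a generator function rather than the PR's generator expression; the wrapper name and call site are assumptions, and `Features.decode_example` is assumed to be available in the targeted `datasets` version, while `Features.encode_example` is a known API):
```python
def _encoded_examples(ex_iterable, features):
    # re-encode then decode each example so fields such as Audio/Image are
    # materialized before the user's map function sees them
    for key, example in ex_iterable:
        yield key, features.decode_example(features.encode_example(example))
```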
Fix #3505 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3556/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3555/comments | https://api.github.com/repos/huggingface/datasets/issues/3555/events | https://github.com/huggingface/datasets/issues/3555 | 1,097,736,982 | I_kwDODunzps5BbiMW | 3,555 | DuplicatedKeysError when loading tweet_qa dataset | {
"login": "LeonieWeissweiler",
"id": 30300891,
"node_id": "MDQ6VXNlcjMwMzAwODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeonieWeissweiler",
"html_url": "https://github.com/LeonieWeissweiler",
"followers_url": "https://api.github.com/users/LeonieWeissweiler/followers",
"following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}",
"gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions",
"organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs",
"repos_url": "https://api.github.com/users/LeonieWeissweiler/repos",
"events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows:\r\n```python\r\nimport datasets\r\ndset = datasets.load_dataset(\"tweet_qa\", revision=\"master\")\r\n```"
] | 1,641,811,991,000 | 1,642,000,653,000 | 1,642,000,436,000 | NONE | null | null | null | When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs:
```
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e
Keys should be unique and deterministic in nature
```
Might be related to issues #2433 and #2333
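In the meantime, a workaround sketch (the same approach as in the fix comment above: load the patched dataset script from the `master` branch):
```python
import datasets

# use the fixed script from the master branch of the dataset repository
dset = datasets.load_dataset("tweet_qa", revision="master")
```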
- `datasets` version: 1.17.0
- Python version: 3.8.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3555/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3554/comments | https://api.github.com/repos/huggingface/datasets/issues/3554/events | https://github.com/huggingface/datasets/issues/3554 | 1,097,711,367 | I_kwDODunzps5Bbb8H | 3,554 | ImportError: cannot import name 'is_valid_waiter_error' | {
"login": "danielbellhv",
"id": 84714841,
"node_id": "MDQ6VXNlcjg0NzE0ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/84714841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielbellhv",
"html_url": "https://github.com/danielbellhv",
"followers_url": "https://api.github.com/users/danielbellhv/followers",
"following_url": "https://api.github.com/users/danielbellhv/following{/other_user}",
"gists_url": "https://api.github.com/users/danielbellhv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielbellhv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielbellhv/subscriptions",
"organizations_url": "https://api.github.com/users/danielbellhv/orgs",
"repos_url": "https://api.github.com/users/danielbellhv/repos",
"events_url": "https://api.github.com/users/danielbellhv/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielbellhv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,641,810,724,000 | 1,641,810,724,000 | null | NONE | null | null | null | Based on [SO post](https://stackoverflow.com/q/70606147/17840900).
I'm following along with this [Notebook][1], cell "**Loading the dataset**".
Kernel: `conda_pytorch_p36`.
I run:
```
! pip install datasets transformers optimum[intel]
```
Output:
```
Requirement already satisfied: datasets in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.17.0)
Requirement already satisfied: transformers in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (4.15.0)
Requirement already satisfied: optimum[intel] in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (0.1.3)
Requirement already satisfied: numpy>=1.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.19.5)
Requirement already satisfied: dill in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.3.4)
Requirement already satisfied: tqdm>=4.62.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.62.3)
Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.2.1)
Requirement already satisfied: packaging in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (21.3)
Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (6.0.1)
Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.1.5)
Requirement already satisfied: xxhash in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.0.2)
Requirement already satisfied: aiohttp in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (3.8.1)
Requirement already satisfied: fsspec[http]>=2021.05.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2021.11.1)
Requirement already satisfied: dataclasses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.8)
Requirement already satisfied: multiprocess in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.70.12.2)
Requirement already satisfied: importlib-metadata in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.5.0)
Requirement already satisfied: requests>=2.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.25.1)
Requirement already satisfied: pyyaml>=5.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (5.4.1)
Requirement already satisfied: regex!=2019.12.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (2021.4.4)
Requirement already satisfied: tokenizers<0.11,>=0.10.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.10.3)
Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (3.0.12)
Requirement already satisfied: sacremoses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.0.46)
Requirement already satisfied: torch>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.10.1)
Requirement already satisfied: sympy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.8)
Requirement already satisfied: coloredlogs in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (15.0.1)
Requirement already satisfied: pycocotools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (2.0.3)
Requirement already satisfied: neural-compressor>=1.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.9)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.10.0.0)
Requirement already satisfied: sigopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.2.0)
Requirement already satisfied: opencv-python in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (4.5.1.48)
Requirement already satisfied: cryptography in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.4.7)
Requirement already satisfied: py-cpuinfo in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.0.0)
Requirement already satisfied: gevent in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (21.1.2)
Requirement already satisfied: schema in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.7.5)
Requirement already satisfied: psutil in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.8.0)
Requirement already satisfied: gevent-websocket in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.10.1)
Requirement already satisfied: hyperopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.2.7)
Requirement already satisfied: Flask in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.0.1)
Requirement already satisfied: prettytable in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.5.0)
Requirement already satisfied: Flask-SocketIO in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.1.1)
Requirement already satisfied: scikit-learn in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.24.2)
Requirement already satisfied: Pillow in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.4.0)
Requirement already satisfied: Flask-Cors in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.0.10)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging->datasets) (2.4.7)
Requirement already satisfied: chardet<5,>=3.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (4.0.0)
Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2021.5.30)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (1.26.5)
Requirement already satisfied: idna<3,>=2.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2.10)
Requirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.6.3)
Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (2.0.9)
Requirement already satisfied: attrs>=17.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (21.2.0)
Requirement already satisfied: asynctest==0.13.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (0.13.0)
Requirement already satisfied: idna-ssl>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.1.0)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (4.0.1)
Requirement already satisfied: aiosignal>=1.1.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (5.1.0)
Requirement already satisfied: humanfriendly>=9.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from coloredlogs->optimum[intel]) (10.0)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata->datasets) (3.4.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2021.1)
Requirement already satisfied: matplotlib>=2.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (3.3.4)
Requirement already satisfied: cython>=0.27.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (0.29.23)
Requirement already satisfied: setuptools>=18.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (52.0.0.post20210125)
Requirement already satisfied: joblib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.0.1)
Requirement already satisfied: click in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (8.0.1)
Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.16.0)
Requirement already satisfied: mpmath>=0.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sympy->optimum[intel]) (1.2.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (0.10.0)
Requirement already satisfied: cffi>=1.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cryptography->neural-compressor>=1.7->optimum[intel]) (1.14.5)
Requirement already satisfied: Werkzeug>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.2)
Requirement already satisfied: Jinja2>=3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (3.0.1)
Requirement already satisfied: itsdangerous>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1)
Requirement already satisfied: python-socketio>=5.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (5.5.0)
Requirement already satisfied: zope.event in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (4.5.0)
Requirement already satisfied: greenlet<2.0,>=0.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (1.1.0)
Requirement already satisfied: zope.interface in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (5.4.0)
Requirement already satisfied: future in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.18.2)
Requirement already satisfied: cloudpickle in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.6.0)
Requirement already satisfied: networkx>=2.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (2.5)
Requirement already satisfied: scipy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.5.3)
Requirement already satisfied: py4j in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.10.7)
Requirement already satisfied: wcwidth in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from prettytable->neural-compressor>=1.7->optimum[intel]) (0.2.5)
Requirement already satisfied: contextlib2>=0.5.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from schema->neural-compressor>=1.7->optimum[intel]) (0.6.0.post1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from scikit-learn->neural-compressor>=1.7->optimum[intel]) (2.1.0)
Requirement already satisfied: pyOpenSSL>=20.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (20.0.1)
Requirement already satisfied: pypng>=0.0.20 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.0.21)
Requirement already satisfied: kubernetes<13.0.0,>=12.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (12.0.1)
Requirement already satisfied: rsa<5.0.0,>=4.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.7.2)
Requirement already satisfied: boto3<2.0.0,==1.16.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.16.34)
Requirement already satisfied: Pint<0.17.0,>=0.16.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.16.1)
Requirement already satisfied: GitPython>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.18)
Requirement already satisfied: backoff<2.0.0,>=1.10.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.11.1)
Requirement already satisfied: ipython>=5.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (7.16.1)
Requirement already satisfied: docker<5.0.0,>=4.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.4.4)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.3.7)
Requirement already satisfied: botocore<1.20.0,>=1.19.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (1.19.63)
Requirement already satisfied: pycparser in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cffi>=1.12->cryptography->neural-compressor>=1.7->optimum[intel]) (2.20)
Requirement already satisfied: websocket-client>=0.32.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from docker<5.0.0,>=4.4.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.58.0)
Requirement already satisfied: gitdb<5,>=4.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.0.9)
Requirement already satisfied: traitlets>=4.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.3.3)
Requirement already satisfied: jedi>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.17.2)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (3.0.19)
Requirement already satisfied: backcall in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0)
Requirement already satisfied: pygments in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (2.9.0)
Requirement already satisfied: pexpect in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.8.0)
Requirement already satisfied: decorator in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.9)
Requirement already satisfied: pickleshare in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.5)
Requirement already satisfied: MarkupSafe>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Jinja2>=3.0->Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1)
Requirement already satisfied: google-auth>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.30.2)
Requirement already satisfied: requests-oauthlib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.3.0)
Requirement already satisfied: importlib-resources in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Pint<0.17.0,>=0.16.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.4.0)
Requirement already satisfied: python-engineio>=4.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (4.3.0)
Requirement already satisfied: bidict>=0.21.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (0.21.4)
Requirement already satisfied: pyasn1>=0.1.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from rsa<5.0.0,>=4.7->sigopt->neural-compressor>=1.7->optimum[intel]) (0.4.8)
Requirement already satisfied: smmap<6,>=3.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gitdb<5,>=4.0.1->GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (4.2.2)
Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from jedi>=0.10->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.1)
Requirement already satisfied: ipython-genutils in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from traitlets>=4.2->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0)
Requirement already satisfied: ptyprocess>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pexpect->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.0)
Requirement already satisfied: oauthlib>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests-oauthlib->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.1)
```
---
**Cell:**
```python
from datasets import load_dataset, load_metric
```
OR
```python
import datasets
```
**Traceback:**
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-7-34fb7ba3338d> in <module>
----> 1 from datasets import load_dataset, load_metric
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/__init__.py in <module>
32 )
33
---> 34 from .arrow_dataset import Dataset, concatenate_datasets
35 from .arrow_reader import ArrowReader, ReadInstruction
36 from .arrow_writer import ArrowWriter
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_dataset.py in <module>
59 from . import config, utils
60 from .arrow_reader import ArrowReader
---> 61 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
62 from .features import ClassLabel, Features, FeatureType, Sequence, Value, _ArrayXD, pandas_types_mapper
63 from .filesystems import extract_path_from_uri, is_remote_filesystem
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_writer.py in <module>
26
27 from . import config, utils
---> 28 from .features import (
29 Features,
30 ImageExtensionType,
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/__init__.py in <module>
1 # flake8: noqa
----> 2 from .audio import Audio
3 from .features import *
4 from .features import (
5 _ArrayXD,
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/audio.py in <module>
5 import pyarrow as pa
6
----> 7 from ..utils.streaming_download_manager import xopen
8
9
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/utils/streaming_download_manager.py in <module>
16
17 from .. import config
---> 18 from ..filesystems import COMPRESSION_FILESYSTEMS
19 from .download_manager import DownloadConfig, map_nested
20 from .file_utils import (
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/__init__.py in <module>
11
12 if _has_s3fs:
---> 13 from .s3filesystem import S3FileSystem # noqa: F401
14
15 COMPRESSION_FILESYSTEMS: List[compression.BaseCompressedFileFileSystem] = [
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py in <module>
----> 1 import s3fs
2
3
4 class S3FileSystem(s3fs.S3FileSystem):
5 """
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/__init__.py in <module>
----> 1 from .core import S3FileSystem, S3File
2 from .mapping import S3Map
3
4 from ._version import get_versions
5
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/core.py in <module>
12 from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper
13
---> 14 import aiobotocore
15 import botocore
16 import aiobotocore.session
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/__init__.py in <module>
----> 1 from .session import get_session, AioSession
2
3 __all__ = ['get_session', 'AioSession']
4 __version__ = '1.3.0'
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/session.py in <module>
4 from botocore import retryhandler, translate
5 from botocore.exceptions import PartialCredentialsError
----> 6 from .client import AioClientCreator, AioBaseClient
7 from .hooks import AioHierarchicalEmitter
8 from .parsers import AioResponseParserFactory
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/client.py in <module>
11 from .args import AioClientArgsCreator
12 from .utils import AioS3RegionRedirector
---> 13 from . import waiter
14
15 history_recorder = get_global_history_recorder()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/waiter.py in <module>
4 from botocore.exceptions import ClientError
5 from botocore.waiter import WaiterModel # noqa: F401, lgtm[py/unused-import]
----> 6 from botocore.waiter import Waiter, xform_name, logger, WaiterError, \
7 NormalizedOperationMethod as _NormalizedOperationMethod, is_valid_waiter_error
8 from botocore.docs.docstring import WaiterDocstring
ImportError: cannot import name 'is_valid_waiter_error'
```
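For reference, the versions visible above are `botocore` 1.19.63 (from the pip output) and `aiobotocore` 1.3.0 (from the traceback's `__init__.py`). A plausible reading — an assumption on my part, not confirmed — is that `is_valid_waiter_error` only exists in newer `botocore` releases, so this pair is incompatible. A quick check:
```python
# confirm the installed versions of the two packages implicated in the traceback
import botocore
import aiobotocore

print(botocore.__version__, aiobotocore.__version__)
```
If the versions disagree with what `aiobotocore` expects, upgrading the pair together (e.g. `pip install -U aiobotocore s3fs`; command suggested, not verified) may clear the import error.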
Please let me know if there's anything else I can add to this post.
[1]: https://github.com/huggingface/notebooks/blob/master/examples/text_classification_quantization_inc.ipynb | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3554/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3553/comments | https://api.github.com/repos/huggingface/datasets/issues/3553/events | https://github.com/huggingface/datasets/issues/3553 | 1,097,252,275 | I_kwDODunzps5BZr2z | 3,553 | set_format("np") no longer works for Image data | {
"login": "cgarciae",
"id": 5862228,
"node_id": "MDQ6VXNlcjU4NjIyMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cgarciae",
"html_url": "https://github.com/cgarciae",
"followers_url": "https://api.github.com/users/cgarciae/followers",
"following_url": "https://api.github.com/users/cgarciae/following{/other_user}",
"gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions",
"organizations_url": "https://api.github.com/users/cgarciae/orgs",
"repos_url": "https://api.github.com/users/cgarciae/repos",
"events_url": "https://api.github.com/users/cgarciae/events{/privacy}",
"received_events_url": "https://api.github.com/users/cgarciae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]",
"This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndataset = datasets.load_dataset(\"mnist\")\r\ndataset.set_format(\"jax\")\r\nX_train = dataset[\"train\"][\"image\"]\r\n```",
"Hi! We've recently introduced a new Image feature that yields PIL Images (and caches transforms on them) instead of arrays.\r\n\r\nHowever, this feature requires a custom transform to yield np arrays directly:\r\n```python\r\nddict = datasets.load_dataset(\"mnist\")\r\n\r\ndef pil_image_to_array(batch):\r\n return {\"image\": [np.array(img) for img in batch[\"image\"]]} # or jnp.array(img) for Jax\r\n\r\nddict.set_transform(pil_image_to_array, columns=\"image\", output_all_columns=True)\r\n```\r\n\r\n[Docs](https://huggingface.co/docs/datasets/master/process.html#format-transform) on `set_transform`.\r\n\r\nAlso, the approach proposed by @cgarciae is not the best because it loads the entire column in memory.\r\n\r\n@albertvillanova @lhoestq WDYT? The Audio and the Image feature currently don't support the TF/Jax/PT Formatters, but for the Numpy Formatter maybe it makes more sense to return np arrays (and not a dict in the case of the Audio feature or a PIL Image object in the case of the Image feature).",
"Yes I agree it should return arrays and not a PIL image (and possible an array instead of a dict for audio data).\r\nI'm currently finishing some code refactoring of the image and audio and opening a PR today. Maybe we can look into that after the refactoring"
] | 1,641,748,693,000 | 1,642,081,166,000 | null | NONE | null | null | null | ## Describe the bug
`dataset.set_format("np")` no longer works for image data. Previously you could load MNIST like this:
```python
dataset = load_dataset("mnist")
dataset.set_format("np")
X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array
```
but now it doesn't work: `set_format("np")` seems to have no effect, and the dataset just returns a list/array of PIL images instead of the requested numpy arrays.
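A workaround sketch along the lines of the maintainer suggestions in the comments above (`set_transform` converts the PIL images back to arrays; note that stacking the whole column loads it into memory):
```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("mnist")

def pil_to_array(batch):
    # undo the PIL decoding done by the new Image feature
    return {"image": [np.array(img) for img in batch["image"]]}

dataset.set_transform(pil_to_array, columns=["image"], output_all_columns=True)
X_train = np.stack(dataset["train"]["image"])[..., None]
```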
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3553/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3553/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3552/comments | https://api.github.com/repos/huggingface/datasets/issues/3552/events | https://github.com/huggingface/datasets/pull/3552 | 1,096,985,204 | PR_kwDODunzps4wsM29 | 3,552 | Add the KMWP & DKTC dataset. | {
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,661,934,000 | 1,641,910,410,000 | 1,641,910,410,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3552",
"html_url": "https://github.com/huggingface/datasets/pull/3552",
"diff_url": "https://github.com/huggingface/datasets/pull/3552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3552.patch",
"merged_at": null
} | Add the KMWP & DKTC dataset.
Additional notes:
- Both datasets will be released on January 10 through the GitHub links below.
- https://github.com/tunib-ai/DKTC
- https://github.com/tunib-ai/KMWP
- So the links don't work at the moment, but the code will work soon (after the datasets are released on January 10).
"url": "https://api.github.com/repos/huggingface/datasets/issues/3552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3552/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3551/comments | https://api.github.com/repos/huggingface/datasets/issues/3551/events | https://github.com/huggingface/datasets/pull/3551 | 1,096,561,111 | PR_kwDODunzps4wq_AO | 3,551 | Add more compression types for `to_json` | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq, I looked into how to compress with `zipfile` for which few methods exist, let me know which one looks good:\r\n1. create the file in normal `wb` mode and then zip it separately\r\n2. use `ZipFile.write_str` to write file into the archive. For this we'll need to change how we're writing files from `_write` method \r\n\r\nHow `pandas` handles it is that they have created a wrapper for standard library class `ZipFile` and allow the returned file-like handle to accept byte strings via `write` method instead of `write_str` (purpose was to change the name of function by creating that wrapper)",
"1. sounds not ideal since it creates an intermediary file.\r\nI like pandas' approach. Is it possible to implement 2. using the pandas class ? Or maybe we can have something similar ?"
] | 1,641,579,902,000 | 1,642,168,983,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3551",
"html_url": "https://github.com/huggingface/datasets/pull/3551",
"diff_url": "https://github.com/huggingface/datasets/pull/3551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3551.patch",
"merged_at": null
} | This PR adds `bz2`, `xz`, and `zip` (WIP) for `to_json`. I also plan to add `infer` like how `pandas` does it | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3551/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3550/comments | https://api.github.com/repos/huggingface/datasets/issues/3550/events | https://github.com/huggingface/datasets/issues/3550 | 1,096,522,377 | I_kwDODunzps5BW5qJ | 3,550 | Bug in `openbookqa` dataset | {
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,576,777,000 | 1,642,425,393,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
Dataset entries contain an error: in the `choices` field, the `label` list duplicates the `text` list instead of holding the letter labels.
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> obqa = load_dataset('openbookqa', 'main')
>>> obqa['train'][0]
```
## Expected results
```python
{'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['A', 'B', 'C', 'D']}, 'answerKey': 'D'}
```
## Actual results
```python
{'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting']}, 'answerKey': 'D'}
```
The bug is present in all configs and all splits.
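As an illustration only — a hypothetical client-side patch, not the real fix — the letter labels can be restored with `map` until the script is corrected:
```python
def restore_labels(example):
    # replace the duplicated answer texts with the expected letter labels
    example["choices"]["label"] = ["A", "B", "C", "D"][: len(example["choices"]["text"])]
    return example

obqa = obqa.map(restore_labels)
```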
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.4.0-1057-aws-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3550/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3549/comments | https://api.github.com/repos/huggingface/datasets/issues/3549/events | https://github.com/huggingface/datasets/pull/3549 | 1,096,426,996 | PR_kwDODunzps4wqkGt | 3,549 | Fix sem_eval_2018_task_1 download location | {
"login": "maxpel",
"id": 31095360,
"node_id": "MDQ6VXNlcjMxMDk1MzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxpel",
"html_url": "https://github.com/maxpel",
"followers_url": "https://api.github.com/users/maxpel/followers",
"following_url": "https://api.github.com/users/maxpel/following{/other_user}",
"gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxpel/subscriptions",
"organizations_url": "https://api.github.com/users/maxpel/orgs",
"repos_url": "https://api.github.com/users/maxpel/repos",
"events_url": "https://api.github.com/users/maxpel/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxpel/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,641,569,872,000 | 1,641,569,872,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3549",
"html_url": "https://github.com/huggingface/datasets/pull/3549",
"diff_url": "https://github.com/huggingface/datasets/pull/3549.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3549.patch",
"merged_at": null
} | This changes the download location of sem_eval_2018_task_1 files to include the test set labels as discussed in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500 with @lhoestq. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3549/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3548/comments | https://api.github.com/repos/huggingface/datasets/issues/3548/events | https://github.com/huggingface/datasets/issues/3548 | 1,096,409,512 | I_kwDODunzps5BWeGo | 3,548 | Specify the feature types of a dataset on the Hub without needing a dataset script | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "abidlabs",
"id": 1778297,
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abidlabs",
"html_url": "https://github.com/abidlabs",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abidlabs",
"id": 1778297,
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abidlabs",
"html_url": "https://github.com/abidlabs",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,568,626,000 | 1,641,568,677,000 | null | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
Currently, if I upload a CSV with paths to audio files, the column type is string instead of Audio.
**Describe the solution you'd like**
I'd like to be able to specify the types of the columns, so that when loading the dataset I directly get the feature types I want.
The feature types could be read from the `dataset_infos.json`, for example.
**Describe alternatives you've considered**
Create a dataset script to specify the features, but that seems complicated for a simple thing.
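Another alternative is casting after loading — a minimal sketch (the column name `audio` and the CSV file name are assumptions):
```python
from datasets import Audio, load_dataset

ds = load_dataset("csv", data_files="data.csv", split="train")
# cast the string path column to an Audio feature so decoding happens on access
ds = ds.cast_column("audio", Audio())
```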
cc @abidlabs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3548/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3548/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3547/comments | https://api.github.com/repos/huggingface/datasets/issues/3547/events | https://github.com/huggingface/datasets/issues/3547 | 1,096,405,515 | I_kwDODunzps5BWdIL | 3,547 | Datasets created with `push_to_hub` can't be accessed in offline mode | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it"
] | 1,641,568,345,000 | 1,641,811,484,000 | null | MEMBER | null | null | null | ## Describe the bug
In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`.
## Steps to reproduce the bug
in Python:
```
import datasets
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```
in bash:
```
export HF_DATASETS_OFFLINE=1
```
in Python:
```
import datasets
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```
## Expected results
`datasets` should find the previously-cached dataset.
## Actual results
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'teven/matched_passages_wikidata': Offline mode is enabled
## Environment info
- `datasets` version: 1.16.2.dev0
- Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3547/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3546/comments | https://api.github.com/repos/huggingface/datasets/issues/3546/events | https://github.com/huggingface/datasets/pull/3546 | 1,096,367,684 | PR_kwDODunzps4wqYIV | 3,546 | Remove print statements in datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI failures are unrelated to the changes."
] | 1,641,565,824,000 | 1,641,578,956,000 | 1,641,578,955,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3546",
"html_url": "https://github.com/huggingface/datasets/pull/3546",
"diff_url": "https://github.com/huggingface/datasets/pull/3546.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3546.patch",
"merged_at": 1641578955000
} | This is a second time I'm removing print statements in our datasets, so I've added a test to avoid these issues in the future. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3546/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3545/comments | https://api.github.com/repos/huggingface/datasets/issues/3545/events | https://github.com/huggingface/datasets/pull/3545 | 1,096,189,889 | PR_kwDODunzps4wpziv | 3,545 | fix: 🐛 pass token when retrieving the split names | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Currently, it does not work with https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/common_voice_7_0.py#L146 (which was the goal), because `dl_manager.download_config.use_auth_token` is ignored, and the authentication is required to be use `huggingface-cli login`.\r\nIn my use case (dataset viewer), I'd prefer to use a specific \"User Token Access\", with only the \"read\" role (https://huggingface.co/settings/token).\r\n\r\nSee https://github.com/huggingface/datasets-preview-backend/issues/74#issuecomment-1007316853 for the context",
"> Simply passing download_config is ok :)\r\n\r\nhmm, I prefer only passing use_auth_token. But the question is more: is it correct, in the (convoluted) case if `download_config.use_auth_token` exists and is different from `use_auth_token`? Which one should be used?",
"If both are passed, `use_auth_token` should have the priority (more specific parameters have the higher priority)"
] | 1,641,551,362,000 | 1,641,811,907,000 | 1,641,811,906,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3545",
"html_url": "https://github.com/huggingface/datasets/pull/3545",
"diff_url": "https://github.com/huggingface/datasets/pull/3545.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3545.patch",
"merged_at": 1641811906000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3545/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3544/comments | https://api.github.com/repos/huggingface/datasets/issues/3544/events | https://github.com/huggingface/datasets/issues/3544 | 1,095,784,681 | I_kwDODunzps5BUFjp | 3,544 | Ability to split a dataset in multiple files. | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,641,510,145,000 | 1,641,510,145,000 | null | CONTRIBUTOR | null | null | null | Hello,
**Is your feature request related to a problem? Please describe.**
My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset.
I understand that we shouldn't overwrite an Arrow file, as this could cause segfaults and so on. Before 1.16, I was able to overwrite the dataset, and that would work most of the time with some retries.
**Describe the solution you'd like**
I was thinking that if we could append to `Dataset._data_files`, then when the workers reload the Dataset, they would get the new columns.
**Describe alternatives you've considered**
I currently need to
1. Save multiple "versions" of the dataset and load the latest (a minimal sketch of this is shown below).
2. Try working with cache files to get the latest columns.
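A minimal sketch of workaround 1 (directory naming and function names are assumptions):
```python
import glob
import os

from datasets import Dataset, load_from_disk

def save_new_version(dataset: Dataset, root: str = "my_dataset") -> None:
    # writer: never overwrite an existing Arrow file; write a new directory per version
    version = len(glob.glob(f"{root}_v*")) + 1
    dataset.save_to_disk(f"{root}_v{version}")

def load_latest(root: str = "my_dataset") -> Dataset:
    # workers: reload the most recent version to pick up newly added columns
    # (assumes at least one version has been saved)
    latest = max(glob.glob(f"{root}_v*"), key=os.path.getmtime)
    return load_from_disk(latest)
```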
**Additional context**
I think this would be a great addition to HFDataset, as Parquet supports multi-file input out of the box!
I can make a PR myself with some pointers as needed :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3544/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3543/comments | https://api.github.com/repos/huggingface/datasets/issues/3543/events | https://github.com/huggingface/datasets/issues/3543 | 1,095,226,438 | I_kwDODunzps5BR9RG | 3,543 | Allow loading community metrics from the hub, just like datasets | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub/how-to-downstream.md#cached_download))",
"This is a great solution in the meantime, thanks!",
"Here's the code I used, in case it can be of help to someone else:\r\n```python\r\nimport os, shutil\r\nfrom huggingface_hub import hf_hub_download\r\ndef download_metric(repo_id, file_path):\r\n # repo_id: for models \"username/model_name\", for datasets \"datasets/username/model_name\"\r\n local_metric_path = hf_hub_download(repo_id=repo_id, filename=file_path)\r\n updated_local_metric_path = (os.path.dirname(local_metric_path) + os.path.basename(local_metric_path).replace(\".\", \"_\") + \".py\")\r\n shutil.copy(local_metric_path, updated_local_metric_path)\r\n return updated_local_metric_path\r\n\r\nmetric = load_metric(download_metric(REPO_ID, FILE_PATH))\r\n```"
] | 1,641,468,386,000 | 1,641,760,093,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Currently, I can load a metric I implemented myself by providing the local path to the file in `load_metric`.
However, there is no option to do this with a metric uploaded to the Hub.
This means that if I want to allow other users to use it, they must download it first, which makes the usage less smooth.
**Describe the solution you'd like**
Load metrics from the Hub just like datasets are loaded.
In order not to break stuff, the convention could be to put the metric file in a "metrics" folder on the Hub.
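For illustration, the requested usage might look like this (hypothetical — today only local paths and canonical metric names resolve):
```python
from datasets import load_metric

# works today: a local file
metric = load_metric("path/to/my_metric.py")

# requested: resolve a community metric from the Hub, e.g. from a "metrics" folder
metric = load_metric("username/my_metric")
```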
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3543/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3543/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3542/comments | https://api.github.com/repos/huggingface/datasets/issues/3542/events | https://github.com/huggingface/datasets/pull/3542 | 1,095,088,485 | PR_kwDODunzps4wmPIP | 3,542 | Update the CC-100 dataset card | {
"login": "aajanki",
"id": 353043,
"node_id": "MDQ6VXNlcjM1MzA0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aajanki",
"html_url": "https://github.com/aajanki",
"followers_url": "https://api.github.com/users/aajanki/followers",
"following_url": "https://api.github.com/users/aajanki/following{/other_user}",
"gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aajanki/subscriptions",
"organizations_url": "https://api.github.com/users/aajanki/orgs",
"repos_url": "https://api.github.com/users/aajanki/repos",
"events_url": "https://api.github.com/users/aajanki/events{/privacy}",
"received_events_url": "https://api.github.com/users/aajanki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,458,118,000 | 1,641,494,264,000 | 1,641,494,264,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3542",
"html_url": "https://github.com/huggingface/datasets/pull/3542",
"diff_url": "https://github.com/huggingface/datasets/pull/3542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3542.patch",
"merged_at": 1641494264000
} | * summary from the dataset homepage
* more details about the data structure
* this dataset does not contain annotations | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3542/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3541/comments | https://api.github.com/repos/huggingface/datasets/issues/3541/events | https://github.com/huggingface/datasets/issues/3541 | 1,095,033,828 | I_kwDODunzps5BROPk | 3,541 | Support 7-zip compressed data files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,641,453,063,000 | 1,641,453,063,000 | null | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
We should support 7-zip compressed data files:
- in `extract`
- in `iter_archive`
both in streaming and non-streaming modes.
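For illustration, inside a loading script this could look as follows (a fragment; hypothetical until 7-zip support lands):
```python
import datasets

# fragment of a GeneratorBasedBuilder (hypothetical .7z behavior, mirroring tar/zip)
def _split_generators(self, dl_manager):
    archive = dl_manager.download("https://example.org/data.7z")
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            # iter_archive would yield (path_inside_archive, file_object) pairs
            gen_kwargs={"files": dl_manager.iter_archive(archive)},
        )
    ]
```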
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3541/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3540/comments | https://api.github.com/repos/huggingface/datasets/issues/3540/events | https://github.com/huggingface/datasets/issues/3540 | 1,094,900,336 | I_kwDODunzps5BQtpw | 3,540 | How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset? | {
"login": "CindyTing",
"id": 35062414,
"node_id": "MDQ6VXNlcjM1MDYyNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/35062414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CindyTing",
"html_url": "https://github.com/CindyTing",
"followers_url": "https://api.github.com/users/CindyTing/followers",
"following_url": "https://api.github.com/users/CindyTing/following{/other_user}",
"gists_url": "https://api.github.com/users/CindyTing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CindyTing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CindyTing/subscriptions",
"organizations_url": "https://api.github.com/users/CindyTing/orgs",
"repos_url": "https://api.github.com/users/CindyTing/repos",
"events_url": "https://api.github.com/users/CindyTing/events{/privacy}",
"received_events_url": "https://api.github.com/users/CindyTing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,641,435,222,000 | 1,641,435,459,000 | null | NONE | null | null | null | Hi,
I use torch.utils.data.Dataset to define my own data, but I need to use the `map` function of datasets.arrow_dataset.Dataset later, so I would like to convert a torch.utils.data.Dataset to a datasets.arrow_dataset.Dataset.
Here is an example.
```
from torch.utils.data import Dataset
from datasets.arrow_dataset import Dataset as HFDataset
from transformers import AutoTokenizer  # needed for the type hint below
class ADataset(Dataset):
def __init__(self, data):
super().__init__()
self.data = data
def __getitem__(self, index):
return self.data[index]
def __len__(self):
        return len(self.data)
class MDataset():
def __init__(self, tokenizer: AutoTokenizer, data_args, training_args):
self.train_dataset = ADataset(data_args)
self.tokenizer = tokenizer
self.data_args = data_args
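        # this call raises AttributeError: 'ADataset' object has no attribute 'map'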
self.train_dataset = self.train_dataset.map(
self.process_function,
batched=True,
remove_columns=column_names,
load_from_cache_file=True,
desc="Running tokenizer on train dataset",
)
def process_function(self, examples):
sentences = [" ".join(sample[0][3]) for sample in examples]
tokenized = self.tokenizer(
sentences,
max_length=self.max_seq_len,
padding=self.padding,
truncation=True)
```
But it raises an error: `AttributeError: 'ADataset' object has no attribute 'map'`.
So how can I convert a torch.utils.data.Dataset to a datasets.arrow_dataset.Dataset?
Thanks in advance!
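For reference, a minimal sketch of one possible conversion (assuming the items are Arrow-convertible, e.g. dicts of primitives):
```python
from datasets.arrow_dataset import Dataset as HFDataset

torch_ds = ADataset([{"text": "hello"}, {"text": "world"}])
# materialize the torch dataset into a column dict, then build an Arrow-backed dataset
hf_ds = HFDataset.from_dict({"text": [torch_ds[i]["text"] for i in range(len(torch_ds))]})
hf_ds = hf_ds.map(lambda batch: batch, batched=True)  # .map is now available
```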
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3540/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3539/comments | https://api.github.com/repos/huggingface/datasets/issues/3539/events | https://github.com/huggingface/datasets/pull/3539 | 1,094,813,242 | PR_kwDODunzps4wlXU4 | 3,539 | Research wording for nc licenses | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The CI failure is about some missing tags or sections in the dataset cards, and is unrelated to the part about non commercial use of this PR. Merging"
] | 1,641,423,698,000 | 1,641,495,500,000 | 1,641,495,499,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3539",
"html_url": "https://github.com/huggingface/datasets/pull/3539",
"diff_url": "https://github.com/huggingface/datasets/pull/3539.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3539.patch",
"merged_at": 1641495499000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3539/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3538/comments | https://api.github.com/repos/huggingface/datasets/issues/3538/events | https://github.com/huggingface/datasets/pull/3538 | 1,094,756,755 | PR_kwDODunzps4wlLmD | 3,538 | Readme usage update | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,417,988,000 | 1,641,425,665,000 | 1,641,425,055,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3538",
"html_url": "https://github.com/huggingface/datasets/pull/3538",
"diff_url": "https://github.com/huggingface/datasets/pull/3538.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3538.patch",
"merged_at": 1641425055000
} | Noticing that the recent commit throws a lot of errors in the automatic checks. It looks to me that those errors are simply errors that were already there (metadata issues), unrelated to what I've just changed, but worth another look to make sure. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3538/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3537/comments | https://api.github.com/repos/huggingface/datasets/issues/3537/events | https://github.com/huggingface/datasets/pull/3537 | 1,094,738,734 | PR_kwDODunzps4wlH1d | 3,537 | added PII statements and license links to data cards | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,416,361,000 | 1,641,420,157,000 | 1,641,420,157,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3537",
"html_url": "https://github.com/huggingface/datasets/pull/3537",
"diff_url": "https://github.com/huggingface/datasets/pull/3537.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3537.patch",
"merged_at": 1641420157000
} | Updates for the following dataset cards:
- multilingual_librispeech
- openslr
- speech_commands
- superb
- timit_asr
- vctk | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3537/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3536/comments | https://api.github.com/repos/huggingface/datasets/issues/3536/events | https://github.com/huggingface/datasets/pull/3536 | 1,094,645,771 | PR_kwDODunzps4wk0Yb | 3,536 | update `pretty_name` for all datasets | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Pushed the lastest changes!"
] | 1,641,408,305,000 | 1,642,028,386,000 | 1,642,028,385,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3536",
"html_url": "https://github.com/huggingface/datasets/pull/3536",
"diff_url": "https://github.com/huggingface/datasets/pull/3536.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3536.patch",
"merged_at": 1642028385000
} | This PR updates `pretty_name` for all datasets. Previous PR #3498 had done this only for the first 200 datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3536/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3535/comments | https://api.github.com/repos/huggingface/datasets/issues/3535/events | https://github.com/huggingface/datasets/pull/3535 | 1,094,633,214 | PR_kwDODunzps4wkxv0 | 3,535 | Add SVHN dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,407,349,000 | 1,641,996,875,000 | 1,641,996,875,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3535",
"html_url": "https://github.com/huggingface/datasets/pull/3535",
"diff_url": "https://github.com/huggingface/datasets/pull/3535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3535.patch",
"merged_at": 1641996875000
} | Add the SVHN dataset.
Additional notes:
* compared to the TFDS implementation, exposes the additional "full numbers" config
* adds streaming support for `os.path.splitext` and `scipy.io.loadmat`
* adds `h5py` to the requirements list for the dummy data test
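Once merged, loading should work along these lines (the exact config strings are assumptions based on the notes above):
```python
from datasets import load_dataset

cropped = load_dataset("svhn", "cropped_digits", split="train")
full = load_dataset("svhn", "full_numbers", split="train")
```
 | {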
"url": "https://api.github.com/repos/huggingface/datasets/issues/3535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3535/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3534/comments | https://api.github.com/repos/huggingface/datasets/issues/3534/events | https://github.com/huggingface/datasets/pull/3534 | 1,094,352,449 | PR_kwDODunzps4wj3LE | 3,534 | Update wiki_dpr README.md | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,389,384,000 | 1,641,392,212,000 | 1,641,392,211,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3534",
"html_url": "https://github.com/huggingface/datasets/pull/3534",
"diff_url": "https://github.com/huggingface/datasets/pull/3534.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3534.patch",
"merged_at": 1641392211000
} | Some details about wiki_dpr were missing, as noted in https://github.com/huggingface/datasets/issues/3510; I added them and updated the tags and the examples. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3534/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3533/comments | https://api.github.com/repos/huggingface/datasets/issues/3533/events | https://github.com/huggingface/datasets/issues/3533 | 1,094,156,147 | I_kwDODunzps5BN39z | 3,533 | Task search function on hub not working correctly | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon"
] | 1,641,375,390,000 | 1,641,376,988,000 | null | MEMBER | null | null | null | When I want to look at all datasets of the category: `speech-processing` *i.e.* https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads , then the following dataset doesn't show up for some reason:
- https://huggingface.co/datasets/speech_commands
even thought it's task tags seem correct:
https://raw.githubusercontent.com/huggingface/datasets/master/datasets/speech_commands/README.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3533/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3532/comments | https://api.github.com/repos/huggingface/datasets/issues/3532/events | https://github.com/huggingface/datasets/pull/3532 | 1,094,035,066 | PR_kwDODunzps4wi1ft | 3,532 | Give clearer instructions to add the YAML tags | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"this is great, maybe just put all of it in one line?\r\n\r\n> TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging"
] | 1,641,365,272,000 | 1,642,434,877,000 | 1,642,434,876,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3532",
"html_url": "https://github.com/huggingface/datasets/pull/3532",
"diff_url": "https://github.com/huggingface/datasets/pull/3532.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3532.patch",
"merged_at": 1642434876000
} | Fix #3531.
CC: @julien-c @VictorSanh | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3532/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3532/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3531/comments | https://api.github.com/repos/huggingface/datasets/issues/3531/events | https://github.com/huggingface/datasets/issues/3531 | 1,094,033,280 | I_kwDODunzps5BNZ-A | 3,531 | Give clearer instructions to add the YAML tags | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,365,060,000 | 1,642,434,876,000 | 1,642,434,876,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32
Maybe we should give clearer instructions/hints in the README template.
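For illustration, a minimal sketch of the pattern and a cleanup (the tag names below are made up, not taken from the linked commit):
```python
# Illustrative only: the problematic READMEs start their YAML header with a
# stray "YAML tags:" line left over from the template instructions.
bad_header = """---
YAML tags:
annotations_creators:
- found
---"""

# A minimal cleanup drops that stray line; real headers carry many more tags.
good_header = "\n".join(
    line for line in bad_header.splitlines() if line.strip() != "YAML tags:"
)
print(good_header)
```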
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3531/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3530/comments | https://api.github.com/repos/huggingface/datasets/issues/3530/events | https://github.com/huggingface/datasets/pull/3530 | 1,093,894,732 | PR_kwDODunzps4wiZCw | 3,530 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,346,327,000 | 1,641,387,051,000 | 1,641,387,050,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3530",
"html_url": "https://github.com/huggingface/datasets/pull/3530",
"diff_url": "https://github.com/huggingface/datasets/pull/3530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3530.patch",
"merged_at": 1641387050000
} | Removing reference to "Common Voice" in Personal and Sensitive Information section.
Adding link to license.
Correcting license type in metadata. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3530/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3529/comments | https://api.github.com/repos/huggingface/datasets/issues/3529/events | https://github.com/huggingface/datasets/pull/3529 | 1,093,846,356 | PR_kwDODunzps4wiPA9 | 3,529 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,340,367,000 | 1,641,387,015,000 | 1,641,387,014,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3529",
"html_url": "https://github.com/huggingface/datasets/pull/3529",
"diff_url": "https://github.com/huggingface/datasets/pull/3529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3529.patch",
"merged_at": 1641387014000
} | Updating licensing information & personal and sensitive information. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3529/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3528/comments | https://api.github.com/repos/huggingface/datasets/issues/3528/events | https://github.com/huggingface/datasets/pull/3528 | 1,093,844,616 | PR_kwDODunzps4wiOqH | 3,528 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,340,091,000 | 1,641,386,981,000 | 1,641,386,980,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3528",
"html_url": "https://github.com/huggingface/datasets/pull/3528",
"diff_url": "https://github.com/huggingface/datasets/pull/3528.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3528.patch",
"merged_at": 1641386980000
} | Updating license with appropriate capitalization & a link.
Updating Personal and Sensitive Information to address PII concern. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3528/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3527/comments | https://api.github.com/repos/huggingface/datasets/issues/3527/events | https://github.com/huggingface/datasets/pull/3527 | 1,093,840,707 | PR_kwDODunzps4wiN1w | 3,527 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,339,581,000 | 1,641,342,230,000 | 1,641,342,230,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3527",
"html_url": "https://github.com/huggingface/datasets/pull/3527",
"diff_url": "https://github.com/huggingface/datasets/pull/3527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3527.patch",
"merged_at": 1641342230000
} | Adding licensing information. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3527/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3526/comments | https://api.github.com/repos/huggingface/datasets/issues/3526/events | https://github.com/huggingface/datasets/pull/3526 | 1,093,833,446 | PR_kwDODunzps4wiMaQ | 3,526 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,338,723,000 | 1,641,339,008,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3526",
"html_url": "https://github.com/huggingface/datasets/pull/3526",
"diff_url": "https://github.com/huggingface/datasets/pull/3526.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3526.patch",
"merged_at": null
} | Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3526/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3525/comments | https://api.github.com/repos/huggingface/datasets/issues/3525/events | https://github.com/huggingface/datasets/pull/3525 | 1,093,831,268 | PR_kwDODunzps4wiL8p | 3,525 | Adding license information for Openbookcorpus | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks, @meg-huggingface for the updates!\r\n\r\nThanks also for setting me as a reviewer, but I'm totally not a specialist of the datasets themselves, so I prefer to just lurk and let @lhoestq @albertvillanova @mariosasko or @patrickvonplaten review these changes (https://github.com/huggingface/datasets/pulls/assigned/meg-huggingface).",
"The MIT license seems to be for the crawling code, no ? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular the paragraph 10 for end-users. In particular it seems that end users can download and use the content \"for their personal enjoyment in any reasonable non-commercial manner in compliance with copyright law\" and the smashwords end-users agreement.\r\n\r\nIt should be the same for https://github.com/huggingface/datasets/pull/3526 as well"
] | 1,641,338,436,000 | 1,641,386,918,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3525",
"html_url": "https://github.com/huggingface/datasets/pull/3525",
"diff_url": "https://github.com/huggingface/datasets/pull/3525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3525.patch",
"merged_at": null
} | Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3525/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3524/comments | https://api.github.com/repos/huggingface/datasets/issues/3524/events | https://github.com/huggingface/datasets/pull/3524 | 1,093,826,723 | PR_kwDODunzps4wiK_v | 3,524 | Adding link to license. | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,641,337,908,000 | 1,641,385,898,000 | 1,641,385,897,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3524",
"html_url": "https://github.com/huggingface/datasets/pull/3524",
"diff_url": "https://github.com/huggingface/datasets/pull/3524.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3524.patch",
"merged_at": 1641385897000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3524/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3523/comments | https://api.github.com/repos/huggingface/datasets/issues/3523/events | https://github.com/huggingface/datasets/pull/3523 | 1,093,819,227 | PR_kwDODunzps4wiJc2 | 3,523 | Added links to licensing and PII message in vctk dataset | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,337,018,000 | 1,641,497,630,000 | 1,641,497,630,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3523",
"html_url": "https://github.com/huggingface/datasets/pull/3523",
"diff_url": "https://github.com/huggingface/datasets/pull/3523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3523.patch",
"merged_at": 1641497630000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3523/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3522/comments | https://api.github.com/repos/huggingface/datasets/issues/3522/events | https://github.com/huggingface/datasets/issues/3522 | 1,093,807,586 | I_kwDODunzps5BMi3i | 3,522 | wmt19 is broken (zh-en) | {
"login": "AjayP13",
"id": 5404177,
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AjayP13",
"html_url": "https://github.com/AjayP13",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [] | 1,641,335,625,000 | 1,642,425,415,000 | null | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wmt19", 'zh-en')
```
## Expected results
The dataset should download.
## Actual results
`ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip`
## Environment info
- `datasets` version: 1.15.1
- Platform: Linux
- Python version: 3.8
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3522/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3521/comments | https://api.github.com/repos/huggingface/datasets/issues/3521/events | https://github.com/huggingface/datasets/pull/3521 | 1,093,797,947 | PR_kwDODunzps4wiFCs | 3,521 | Vivos license update | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,334,667,000 | 1,641,334,696,000 | 1,641,334,696,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3521",
"html_url": "https://github.com/huggingface/datasets/pull/3521",
"diff_url": "https://github.com/huggingface/datasets/pull/3521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3521.patch",
"merged_at": null
} | Updated the license information with the link to the license text | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3521/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3520/comments | https://api.github.com/repos/huggingface/datasets/issues/3520/events | https://github.com/huggingface/datasets/pull/3520 | 1,093,747,753 | PR_kwDODunzps4wh6oD | 3,520 | Audio datacard update - first pass | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I'm not sure that we want to change the tags at the top of the cards by hand. Those are used to create the tags in the hub. Although looking at all the tags now, we might want to normalize the current tags again (hyphens or no, \".0\" or no). Maybe we could add a binary tag for public domain or not?",
"> \r\n\r\nThat's a good point, I didn't realize these were auto-populated.\r\nAt the same time, some of them are wrong -- how/where are they auto-populated? Seems like we should fix it at that source for the future.\r\nIn the mean time, I see that \"cc0-1.0\" is the desired tag for public domain, so I will change that for now."
] | 1,641,329,905,000 | 1,641,385,821,000 | 1,641,385,820,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3520",
"html_url": "https://github.com/huggingface/datasets/pull/3520",
"diff_url": "https://github.com/huggingface/datasets/pull/3520.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3520.patch",
"merged_at": 1641385820000
} | Filling out data card "Personal and Sensitive Information" for speech datasets to note PII concerns | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3520/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3519/comments | https://api.github.com/repos/huggingface/datasets/issues/3519/events | https://github.com/huggingface/datasets/pull/3519 | 1,093,655,205 | PR_kwDODunzps4whnXH | 3,519 | CC100: Using HTTPS for the data source URL fixes load_dataset() | {
"login": "aajanki",
"id": 353043,
"node_id": "MDQ6VXNlcjM1MzA0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aajanki",
"html_url": "https://github.com/aajanki",
"followers_url": "https://api.github.com/users/aajanki/followers",
"following_url": "https://api.github.com/users/aajanki/following{/other_user}",
"gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aajanki/subscriptions",
"organizations_url": "https://api.github.com/users/aajanki/orgs",
"repos_url": "https://api.github.com/users/aajanki/repos",
"events_url": "https://api.github.com/users/aajanki/events{/privacy}",
"received_events_url": "https://api.github.com/users/aajanki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,321,954,000 | 1,641,403,714,000 | 1,641,403,714,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3519",
"html_url": "https://github.com/huggingface/datasets/pull/3519",
"diff_url": "https://github.com/huggingface/datasets/pull/3519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3519.patch",
"merged_at": 1641403714000
} | Without this change the following script (with any lang parameter) consistently fails. After changing to the HTTPS URL, the script works as expected.
```python
from datasets import load_dataset
dataset = load_dataset("cc100", lang="en")
```
This is the error produced by the previous script:
```sh
Using custom data configuration en-lang=en
Downloading and preparing dataset cc100/en to /home/antti/.cache/huggingface/datasets/cc100/en-lang=en/0.0.0/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b...
Traceback (most recent call last):
File "/home/antti/tmp/cc100/cc100.py", line 3, in <module>
dataset = load_dataset("cc100", lang="en")
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/load.py", line 1694, in load_dataset
builder_instance.download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 595, in download_and_prepare
self._download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/antti/.cache/huggingface/modules/datasets_modules/datasets/cc100/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b/cc100.py", line 117, in _split_generators
path = dl_manager.download_and_extract(download_url)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 308, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 251, in map_nested
return function(data_struct)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach http://data.statmt.org/cc-100/en.txt.xz (error 503)
```
Note that I get the same behavior using curl on the command line. The plain HTTP `curl -L http://data.statmt.org/cc-100/en.txt.xz` fails with "503 Service unavailable", but with the HTTPS version of the URL curl starts downloading the file.
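For reference, the fix amounts to switching the scheme of the download URL in the loading script; a minimal sketch (the constant and helper names are assumptions, not the actual identifiers in `cc100.py`):
```python
# Sketch only: with HTTPS the server sees a single request instead of an
# HTTP request immediately followed by its redirected HTTPS counterpart.
_BASE_URL = "https://data.statmt.org/cc-100/"  # previously "http://data.statmt.org/cc-100/"

def download_url(lang: str) -> str:
    # e.g. download_url("en") -> "https://data.statmt.org/cc-100/en.txt.xz"
    return f"{_BASE_URL}{lang}.txt.xz"
```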
My guess is that the server does overly aggressive rate limiting. When a client requests an HTTP URL, it (sensibly) gets redirected to the HTTPS equivalent, but then the server notices two requests coming from the same client (the original HTTP one and the redirected HTTPS one) within a brief time window, so the rate limiter kicks in and blocks the second request! If the client initially uses the HTTPS URL, there is only one incoming request, which the rate limiter allows. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3519/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3518/comments | https://api.github.com/repos/huggingface/datasets/issues/3518/events | https://github.com/huggingface/datasets/issues/3518 | 1,093,063,455 | I_kwDODunzps5BJtMf | 3,518 | Add PubMed Central Open Access dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"In the framework of BigScience:\r\n- bigscience-workshop/data_tooling#121\r\n\r\nI have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access\r\n\r\nHowever, I was wondering that it may be more appropriate to move it under an org namespace: `pubmed_central` or `pmc`\r\nThis way, we could add other datasets I'm also working on: Author Manuscript Dataset, Historical OCR Dataset, LitArch Open Access Subset.\r\n\r\nWhat do you think? @lhoestq @mariosasko ",
"Why not ! Having them under such namespaces would also help people searching for this kind of datasets.\r\nWe can also invite people from pubmed at one point",
"DONE: https://huggingface.co/datasets/pmc/open_access"
] | 1,641,279,275,000 | 1,642,433,157,000 | 1,642,433,157,000 | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** PubMed Central Open Access
- **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3518/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3517/comments | https://api.github.com/repos/huggingface/datasets/issues/3517/events | https://github.com/huggingface/datasets/pull/3517 | 1,092,726,651 | PR_kwDODunzps4wemwU | 3,517 | Add CPPE-5 dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,234,680,000 | 1,641,408,782,000 | 1,641,408,782,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3517",
"html_url": "https://github.com/huggingface/datasets/pull/3517",
"diff_url": "https://github.com/huggingface/datasets/pull/3517.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3517.patch",
"merged_at": 1641408782000
} | Adds the recently released CPPE-5 dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3517/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3516/comments | https://api.github.com/repos/huggingface/datasets/issues/3516/events | https://github.com/huggingface/datasets/pull/3516 | 1,092,657,738 | PR_kwDODunzps4weYhE | 3,516 | dataset `asset` - change to raw.githubusercontent.com URLs | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,228,237,000 | 1,641,231,542,000 | 1,641,231,541,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3516",
"html_url": "https://github.com/huggingface/datasets/pull/3516",
"diff_url": "https://github.com/huggingface/datasets/pull/3516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3516.patch",
"merged_at": 1641231541000
} | Changed the URLs to the ones to which requests were being automatically redirected; before this change, the download was failing.
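For illustration only (not part of this PR), a minimal sketch of how one could confirm where an old-style GitHub URL redirects, assuming the `requests` library; the file path below is a hypothetical example, not necessarily one of the URLs changed here:

```python
# Follow redirects for an old-style URL and print the final location.
import requests

old_url = "https://github.com/facebookresearch/asset/raw/main/dataset/asset.valid.orig"  # hypothetical example path
response = requests.head(old_url, allow_redirects=True, timeout=10)
print(response.url)  # expected to land on raw.githubusercontent.com
```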
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3516/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3515/comments | https://api.github.com/repos/huggingface/datasets/issues/3515/events | https://github.com/huggingface/datasets/issues/3515 | 1,092,624,695 | I_kwDODunzps5BICE3 | 3,515 | `ExpectedMoreDownloadedFiles` for `evidence_infer_treatment` | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [] | 1,641,225,518,000 | 1,642,425,937,000 | null | MEMBER | null | null | null | ## Describe the bug
I am trying to load a dataset called `evidence_infer_treatment`. The first subset (`1.1`) works fine, but the second (`2.0`) returns an error: it downloads a file but crashes during checksum verification.
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("evidence_infer_treatment", "2.0")
Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset
use_auth_token=use_auth_token,
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 664, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 33, in verify_checksums
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'http://evidence-inference.ebm-nlp.com/v2.0.tar.gz'}
```
I did try passing the argument `ignore_verifications=True`, but ran into an error when trying to build the dataset:
```python
>>> load_dataset("evidence_infer_treatment", "2.0", ignore_verifications=True, download_mode="force_redownload")
Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24...
Downloading: 164MB [00:23, 6.98MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset
use_auth_token=use_auth_token,
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 681, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 1080, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 1032, in encode_example
return encode_nested_example(self, example)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in encode_nested_example
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in encode_nested_example
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in <listcomp>
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 828, in encode_nested_example
for k, dict_tuples in utils.zip_dict(schema.feature, *obj):
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: ''
```
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyArrow version: 6.0.1
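As a debugging aid (not from the original report), one could inspect which source URLs the recorded dataset infos expect — a sketch assuming the `get_dataset_infos` helper available in recent `datasets` versions:

```python
# Compare the URLs recorded at dataset-creation time against what is downloaded now.
from datasets import get_dataset_infos

infos = get_dataset_infos("evidence_infer_treatment")
print(list(infos["2.0"].download_checksums))  # source URLs the verification step expects
```

If the recorded URL no longer matches what the script actually downloads, the checksums in the script's metadata likely need to be regenerated.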
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3515/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3514/comments | https://api.github.com/repos/huggingface/datasets/issues/3514/events | https://github.com/huggingface/datasets/pull/3514 | 1,092,606,383 | PR_kwDODunzps4weN9W | 3,514 | Fix to_tf_dataset references in docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The code snippet in [this section](https://huggingface.co/docs/datasets/master/use_dataset.html?highlight=to_tf_dataset#tensorflow) is missing an import (`DataCollatorWithPadding`) and doesn't initialize the TF model before the `model.fit` call."
] | 1,641,223,899,000 | 1,641,408,768,000 | 1,641,408,768,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3514",
"html_url": "https://github.com/huggingface/datasets/pull/3514",
"diff_url": "https://github.com/huggingface/datasets/pull/3514.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3514.patch",
"merged_at": 1641408767000
} | Fix the `to_tf_dataset` references in the docs. The currently failing example of usage will be fixed by #3338. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3514/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3513/comments | https://api.github.com/repos/huggingface/datasets/issues/3513/events | https://github.com/huggingface/datasets/pull/3513 | 1,092,569,802 | PR_kwDODunzps4weGWl | 3,513 | Add desc parameter to filter | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,641,221,058,000 | 1,641,407,485,000 | 1,641,407,485,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3513",
"html_url": "https://github.com/huggingface/datasets/pull/3513",
"diff_url": "https://github.com/huggingface/datasets/pull/3513.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3513.patch",
"merged_at": 1641407484000
} | Fix #3317 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3513/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3512/comments | https://api.github.com/repos/huggingface/datasets/issues/3512/events | https://github.com/huggingface/datasets/issues/3512 | 1,092,359,973 | I_kwDODunzps5BHBcl | 3,512 | No Data format found | {
"login": "shazzad47",
"id": 57741378,
"node_id": "MDQ6VXNlcjU3NzQxMzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57741378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shazzad47",
"html_url": "https://github.com/shazzad47",
"followers_url": "https://api.github.com/users/shazzad47/followers",
"following_url": "https://api.github.com/users/shazzad47/following{/other_user}",
"gists_url": "https://api.github.com/users/shazzad47/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shazzad47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shazzad47/subscriptions",
"organizations_url": "https://api.github.com/users/shazzad47/orgs",
"repos_url": "https://api.github.com/users/shazzad47/repos",
"events_url": "https://api.github.com/users/shazzad47/events{/privacy}",
"received_events_url": "https://api.github.com/users/shazzad47/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi, which dataset is giving you an error?"
] | 1,641,202,871,000 | 1,642,425,965,000 | 1,642,425,965,000 | NONE | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3512/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3511/comments | https://api.github.com/repos/huggingface/datasets/issues/3511/events | https://github.com/huggingface/datasets/issues/3511 | 1,092,170,411 | I_kwDODunzps5BGTKr | 3,511 | Dataset | {
"login": "MIKURI0114",
"id": 92849978,
"node_id": "U_kgDOBYjHOg",
"avatar_url": "https://avatars.githubusercontent.com/u/92849978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MIKURI0114",
"html_url": "https://github.com/MIKURI0114",
"followers_url": "https://api.github.com/users/MIKURI0114/followers",
"following_url": "https://api.github.com/users/MIKURI0114/following{/other_user}",
"gists_url": "https://api.github.com/users/MIKURI0114/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MIKURI0114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MIKURI0114/subscriptions",
"organizations_url": "https://api.github.com/users/MIKURI0114/orgs",
"repos_url": "https://api.github.com/users/MIKURI0114/repos",
"events_url": "https://api.github.com/users/MIKURI0114/events{/privacy}",
"received_events_url": "https://api.github.com/users/MIKURI0114/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Can you reopen with the correct dataset name (if relevant)?\r\n\r\nThanks",
"The dataset viewer was down tonight. It works again."
] | 1,641,175,403,000 | 1,641,199,286,000 | 1,641,198,187,000 | NONE | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3511/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3510/comments | https://api.github.com/repos/huggingface/datasets/issues/3510/events | https://github.com/huggingface/datasets/issues/3510 | 1,091,997,004 | I_kwDODunzps5BFo1M | 3,510 | `wiki_dpr` details for Open Domain Question Answering tasks | {
"login": "pk1130",
"id": 40918514,
"node_id": "MDQ6VXNlcjQwOTE4NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/40918514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pk1130",
"html_url": "https://github.com/pk1130",
"followers_url": "https://api.github.com/users/pk1130/followers",
"following_url": "https://api.github.com/users/pk1130/following{/other_user}",
"gists_url": "https://api.github.com/users/pk1130/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pk1130/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pk1130/subscriptions",
"organizations_url": "https://api.github.com/users/pk1130/orgs",
"repos_url": "https://api.github.com/users/pk1130/repos",
"events_url": "https://api.github.com/users/pk1130/events{/privacy}",
"received_events_url": "https://api.github.com/users/pk1130/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018.\r\nEach instance contains a paragraph of at most 100 word, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector)."
] | 1,641,121,441,000 | 1,641,388,565,000 | null | NONE | null | null | null | Hey guys!
Thanks for creating the `wiki_dpr` dataset!
I am currently trying to use the dataset for context retrieval with DPR on NQ questions and need details about what each of the files and data instances contains, which version of the Wikipedia dump it uses, etc. (A quick way to inspect a record is sketched below.) Please respond at your earliest convenience! Thanks a ton!
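A minimal, hypothetical way to peek at one record — the config name and field names below are assumptions based on the dataset card, and streaming support for this script is not guaranteed:

```python
# Stream a single example from wiki_dpr and list its fields.
from datasets import load_dataset

ds = load_dataset("wiki_dpr", "psgs_w100.nq.no_index", split="train", streaming=True)
example = next(iter(ds))
print(example.keys())  # per the dataset card: id, text, title, embeddings
```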
P.S.: (If one of @thomwolf @lewtun @lhoestq could respond, that would be even better since they have the first-hand details of the dataset. If anyone else has those, please reach out! Thanks!) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3510/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3507/comments | https://api.github.com/repos/huggingface/datasets/issues/3507/events | https://github.com/huggingface/datasets/issues/3507 | 1,091,214,808 | I_kwDODunzps5BCp3Y | 3,507 | Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nI don't really have an opinion regarding the JSON metadata as I don't know enough about it.\r\n\r\n",
"I don't know all the details, but generally I'd be in favor of unifying the metadata formats into YAML inside .md (and so deprecating the dataset_infos.json) \r\n\r\n(Ultimately the CI can run on \"HuggingFace Actions\" instead of on GitHub)",
"The dataset_infos.json file currently has these useful infos for each dataset configuration, that I think can be moved to the dataset tags:\r\n- Size of the dataset in MB: download size, arrow file size, and total size (sum of download + arrow)\r\n- Size of each split in MB and number of examples. Again this can be moved to the dataset tags\r\n- Feature type of each column\r\n- supported task templates (it defines what columns correspond to the features and labels for example)\r\n\r\nBut it also has this, which I'm not sure if it should be in the tags or not:\r\n- Checksums of the downloaded files for integrity verifications\r\n\r\nSo ultimately this file could probably be deprecated in favor of having the infos in the tags.\r\n\r\n> Also note that for generating both (dataset_infos.json file and dummy data), the entire dataset needs being downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).\r\n\r\nTo get the exact number of examples and size in MB of the dataset, one needs to download and generate it completely. IMO these infos are very important when someone considers using a dataset. Though using streaming we could do some extrapolation to have approximate values instead.\r\n\r\nFor the integrity verifications we also need the number of examples and the checksums of the downloaded files, so it requires the dataset to be fully downloaded once. This can be optional though.\r\n\r\n> IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work)\r\n\r\nI agree with this. Usually if a dataset works in streaming mode, then it works in non-streaming mode (the other way around is not true though).\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nYes indeed, or at least make sure that it was tested on the true data.",
"(note that if we wanted to display sizes, etc we could also pretty easily parse the `dataset_infos.json` on the hub side)",
"I agree that we can move the relevant parts of `dataset_infos.json` to the YAML tags.\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data. <\r\n> > Yes indeed, or at least make sure that it was tested on the true data.\r\n\r\nI like the idea of testing streaming and falling back to the dummy data test if streaming does not work. Generating dummy data can be very tedious, so this would be a nice incentive for the contributors to make their datasets streamable. "
] | 1,640,883,865,000 | 1,641,396,844,000 | null | MEMBER | null | null | null | I open this issue to have a public discussion about this topic and make a decision.
As previously discussed, once we have the metadata in the dataset card (the README file, containing both Markdown info and YAML tags), what is the point of also having the JSON metadata (the dataset_infos.json file)?
On the other hand, the dummy data is necessary for testing (in our CI suite) that the canonical dataset loads correctly. However:
- the dataset preview feature is already an indirect test that the dataset loads correctly (though it also tests that it is streamable)
- we are migrating canonical datasets to the Hub
Do we really need to continue testing them in our CI?
Also note that generating both (the dataset_infos.json file and the dummy data) requires downloading the entire dataset. This can be an issue for huge datasets (like WIT, with 400 GB of data).
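For concreteness, a sketch of the kind of lightweight streaming check discussed above — assuming a dataset that already supports streaming; the dataset name is illustrative only:

```python
# Smoke-test that a dataset loads in streaming mode by pulling one example.
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train", streaming=True)
assert next(iter(ds)) is not None
```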
Feel free to ping other people for the discussion.
CC: @lhoestq @mariosasko @thomwolf @julien-c @patrickvonplaten @anton-l @LysandreJik @yjernite @nateraw | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3507/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3506/comments | https://api.github.com/repos/huggingface/datasets/issues/3506/events | https://github.com/huggingface/datasets/pull/3506 | 1,091,166,595 | PR_kwDODunzps4wZpot | 3,506 | Allows DatasetDict.filter to have batching option | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,640,877,742,000 | 1,641,291,868,000 | 1,641,291,867,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3506",
"html_url": "https://github.com/huggingface/datasets/pull/3506",
"diff_url": "https://github.com/huggingface/datasets/pull/3506.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3506.patch",
"merged_at": 1641291867000
} | - Related to: #3244
- Fixes: #3503
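A minimal sketch of the batched `DatasetDict.filter` usage this PR enables — the dataset choice is illustrative only:

```python
# Batched filtering applied across every split of a DatasetDict at once.
from datasets import load_dataset

dset_dict = load_dataset("glue", "sst2")  # load_dataset returns a DatasetDict here
filtered = dset_dict.filter(
    lambda batch: [len(s) > 0 for s in batch["sentence"]],  # one bool per example
    batched=True,
)
print({split: len(dset) for split, dset in filtered.items()})
```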
We extend `.filter(..., batched: bool)` support to `DatasetDict`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3506/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3505/comments | https://api.github.com/repos/huggingface/datasets/issues/3505/events | https://github.com/huggingface/datasets/issues/3505 | 1,091,150,820 | I_kwDODunzps5BCaPk | 3,505 | cast_column function not working with map function in streaming mode for Audio features | {
"login": "ashu5644",
"id": 8268102,
"node_id": "MDQ6VXNlcjgyNjgxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashu5644",
"html_url": "https://github.com/ashu5644",
"followers_url": "https://api.github.com/users/ashu5644/followers",
"following_url": "https://api.github.com/users/ashu5644/following{/other_user}",
"gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions",
"organizations_url": "https://api.github.com/users/ashu5644/orgs",
"repos_url": "https://api.github.com/users/ashu5644/repos",
"events_url": "https://api.github.com/users/ashu5644/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashu5644/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)."
] | 1,640,875,921,000 | 1,641,298,270,000 | null | NONE | null | null | null | ## Describe the bug
I am trying to use the `Audio` class to load audio features from a custom dataset. I am able to cast the 'audio' feature to the `Audio` type with the `cast_column` function, but when using the `map` function I do not get the casted `Audio` feature — only the path of the audio file.
After the `load_dataset` call, the 'audio' feature has string type; after `cast_column`, it is converted to the `Audio` type. But inside the `map` function the 'audio' feature is not of `Audio` type — it is a string containing only the file path — so I cannot use the processor in the `encode` function.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor

def encode(batch, processor):
    print("Audio: ", batch['audio'])
    batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
    return batch

def print_ds(ds):
    iterator = iter(ds)
    for d in iterator:
        print("Data: ", d)
        break

processor = Wav2Vec2Processor.from_pretrained(pretrained_model_path)
dataset = load_dataset("custom_dataset.py", "train", data_files={'train': 'train_path.txt'},
                       data_dir="data", streaming=True, split="train")

print("Features: ", dataset.features)
print_ds(dataset)

dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print("Features: ", dataset.features)
print_ds(dataset)

dataset = dataset.map(lambda x: encode(x, processor))
print("Features: ", dataset.features)
print_ds(dataset)
```
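One possible workaround while `map` loses the `Audio` feature in streaming mode (not part of the original report) — a sketch assuming local WAV files and that the `soundfile` library is installed:

```python
# Decode the audio file manually inside the map function instead of relying on
# the Audio feature being preserved through map().
import soundfile as sf

def encode_from_path(batch, processor):
    # In the buggy case batch["audio"] is still a plain path string, so read it directly.
    array, sampling_rate = sf.read(batch["audio"])
    batch["input_values"] = processor(array, sampling_rate=sampling_rate).input_values
    return batch
```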
## Expected results
The `map` function should yield the 'audio' feature as an `Audio` type so that it can be used with the processor function; instead, the processor call fails because only the file path string is provided.
## Actual results
# after load_dataset call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Value(dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': 'data/0116_003.wav'}
# after cast_column call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': {'path': 'data/0116_003.wav', 'array': array([ 1.2662281e-06, 1.0264218e-06, -1.3615092e-06, ...,
1.3017889e-02, 1.0085563e-02, 4.8155054e-03], dtype=float32), 'sampling_rate': 16000}}
# after map call
Features: None
Audio: data/0116_003.wav
Traceback (most recent call last):
File "demo2.py", line 36, in <module>
print_ds(dataset)
File "demo2.py", line 11, in print_ds
for d in iterator:
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "demo2.py", line 32, in <lambda>
dataset = dataset.map(lambda x: batch_encode(x,processor))
File "demo2.py", line 6, in batch_encode
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
TypeError: string indices must be integers
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3505/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3504/comments | https://api.github.com/repos/huggingface/datasets/issues/3504/events | https://github.com/huggingface/datasets/issues/3504 | 1,090,682,230 | I_kwDODunzps5BAn12 | 3,504 | Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst | {
"login": "ToddMorrill",
"id": 12600692,
"node_id": "MDQ6VXNlcjEyNjAwNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ToddMorrill",
"html_url": "https://github.com/ToddMorrill",
"followers_url": "https://api.github.com/users/ToddMorrill/followers",
"following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}",
"gists_url": "https://api.github.com/users/ToddMorrill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ToddMorrill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ToddMorrill/subscriptions",
"organizations_url": "https://api.github.com/users/ToddMorrill/orgs",
"repos_url": "https://api.github.com/users/ToddMorrill/repos",
"events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}",
"received_events_url": "https://api.github.com/users/ToddMorrill/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @ToddMorrill, thanks for reporting.\r\n\r\nThree weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu\r\n\r\nThey told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have their data back online asap."
] | 1,640,802,200,000 | 1,642,426,133,000 | null | NONE | null | null | null | ## Describe the bug
I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt).
https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
I also tried with `wget` as follows.
```
wget https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst
```
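As an aside (not part of the original report), a quick connectivity check like the following can help distinguish a hosting outage from a `datasets` bug:

```python
# Probe the host with a short timeout instead of waiting for the full retry loop.
import requests

url = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
try:
    requests.head(url, timeout=5)
    print("Host reachable")
except requests.exceptions.ConnectTimeout:
    print("Host unreachable — likely a hosting outage rather than a `datasets` bug")
```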
## Expected results
I expect to be able to download this file.
## Actual results
Traceback
```
---------------------------------------------------------------------------
timeout Traceback (most recent call last)
/usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self)
158 try:
--> 159 conn = connection.create_connection(
160 (self._dns_host, self.port), self.timeout, **extra_kw
/usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
/usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
timeout: timed out
During handling of the above exception, another exception occurred:
ConnectTimeoutError Traceback (most recent call last)
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
664 # Make the request on the httplib connection object.
--> 665 httplib_response = self._make_request(
666 conn,
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
375 try:
--> 376 self._validate_conn(conn)
377 except (SocketTimeout, BaseSSLError) as e:
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
995 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 996 conn.connect()
997
/usr/lib/python3/dist-packages/urllib3/connection.py in connect(self)
313 # Add certificate verification
--> 314 conn = self._new_conn()
315 hostname = self.host
/usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self)
163 except SocketTimeout:
--> 164 raise ConnectTimeoutError(
165 self,
ConnectTimeoutError: (<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
/usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
438 if not chunked:
--> 439 resp = conn.urlopen(
440 method=request.method,
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
718
--> 719 retries = retries.increment(
720 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
/usr/lib/python3/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
435 if new_retry.is_exhausted():
--> 436 raise MaxRetryError(_pool, url, error or ResponseError(cause))
437
MaxRetryError: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)'))
During handling of the above exception, another exception occurred:
ConnectTimeout Traceback (most recent call last)
/tmp/ipykernel_15104/606583593.py in <module>
3 # This takes a few minutes to run, so go grab a tea or coffee while you wait :)
4 data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
----> 5 pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
6 pubmed_dataset
~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1655
1656 # Create a dataset builder
-> 1657 builder_instance = load_dataset_builder(
1658 path=path,
1659 name=name,
~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1492 download_config = download_config.copy() if download_config else DownloadConfig()
1493 download_config.use_auth_token = use_auth_token
-> 1494 dataset_module = dataset_module_factory(
1495 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1496 )
~/.local/lib/python3.8/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1116 # Try packaged
1117 if path in _PACKAGED_DATASETS_MODULES:
-> 1118 return PackagedDatasetModuleFactory(
1119 path, data_files=data_files, download_config=download_config, download_mode=download_mode
1120 ).get_module()
~/.local/lib/python3.8/site-packages/datasets/load.py in get_module(self)
773 else get_patterns_locally(str(Path().resolve()))
774 )
--> 775 data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token)
776 module_path, hash = _PACKAGED_DATASETS_MODULES[self.name]
777 builder_kwargs = {"hash": hash, "data_files": data_files}
~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
576 for key, patterns_for_key in patterns.items():
577 out[key] = (
--> 578 DataFilesList.from_local_or_remote(
579 patterns_for_key,
580 base_path=base_path,
~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
545 base_path = base_path if base_path is not None else str(Path().resolve())
546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
--> 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)
548 return cls(data_files, origin_metadata)
549
~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_origin_metadata_locally_or_by_urls(data_files, max_workers, use_auth_token)
492 data_files: List[Union[Path, Url]], max_workers=64, use_auth_token: Optional[Union[bool, str]] = None
493 ) -> Tuple[str]:
--> 494 return thread_map(
495 partial(_get_single_origin_metadata_locally_or_by_urls, use_auth_token=use_auth_token),
496 data_files,
~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in thread_map(fn, *iterables, **tqdm_kwargs)
92 """
93 from concurrent.futures import ThreadPoolExecutor
---> 94 return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
95
96
~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in _executor_map(PoolExecutor, fn, *iterables, **tqdm_kwargs)
74 map_args.update(chunksize=chunksize)
75 with PoolExecutor(**pool_kwargs) as ex:
---> 76 return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
77
78
~/.local/lib/python3.8/site-packages/tqdm/notebook.py in __iter__(self)
252 def __iter__(self):
253 try:
--> 254 for obj in super(tqdm_notebook, self).__iter__():
255 # return super(tqdm...) will not catch exception
256 yield obj
~/.local/lib/python3.8/site-packages/tqdm/std.py in __iter__(self)
1171 # (note: keep this check outside the loop for performance)
1172 if self.disable:
-> 1173 for obj in iterable:
1174 yield obj
1175 return
/usr/lib/python3.8/concurrent/futures/_base.py in result_iterator()
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield fs.pop().result()
620 else:
621 yield fs.pop().result(end_time - time.monotonic())
/usr/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
442 raise CancelledError()
443 elif self._state == FINISHED:
--> 444 return self.__get_result()
445 else:
446 raise TimeoutError()
/usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
387 if self._exception:
388 try:
--> 389 raise self._exception
390 finally:
391 # Break a reference cycle with the exception in self._exception
/usr/lib/python3.8/concurrent/futures/thread.py in run(self)
55
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_single_origin_metadata_locally_or_by_urls(data_file, use_auth_token)
483 if isinstance(data_file, Url):
484 data_file = str(data_file)
--> 485 return (request_etag(data_file, use_auth_token=use_auth_token),)
486 else:
487 data_file = str(data_file.resolve())
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in request_etag(url, use_auth_token)
489 def request_etag(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> Optional[str]:
490 headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token)
--> 491 response = http_head(url, headers=headers, max_retries=3)
492 response.raise_for_status()
493 etag = response.headers.get("ETag") if response.ok else None
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries)
474 headers = copy.deepcopy(headers) or {}
475 headers["user-agent"] = get_datasets_user_agent(user_agent=headers.get("user-agent"))
--> 476 response = _request_with_retry(
477 method="HEAD",
478 url=url,
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
408 if tries > max_retries:
--> 409 raise err
410 else:
411 logger.info(f"{method} request to {url} timed out, retrying... [{tries/max_retries}]")
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
403 tries += 1
404 try:
--> 405 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
406 success = True
407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
/usr/lib/python3/dist-packages/requests/api.py in request(method, url, **kwargs)
58 # cases, and look like a memory leak in others.
59 with sessions.Session() as session:
---> 60 return session.request(method=method, url=url, **kwargs)
61
62
/usr/lib/python3/dist-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
531 }
532 send_kwargs.update(settings)
--> 533 resp = self.send(prep, **send_kwargs)
534
535 return resp
/usr/lib/python3/dist-packages/requests/sessions.py in send(self, request, **kwargs)
644
645 # Send the request
--> 646 r = adapter.send(request, **kwargs)
647
648 # Total elapsed time of the request (approximately)
/usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
502 # TODO: Remove this in 3.0.0: see #2811
503 if not isinstance(e.reason, NewConnectionError):
--> 504 raise ConnectTimeout(e, request=request)
505
506 if isinstance(e.reason, ResponseError):
ConnectTimeout: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)'))
```
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 6.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3504/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3503/comments | https://api.github.com/repos/huggingface/datasets/issues/3503/events | https://github.com/huggingface/datasets/issues/3503 | 1,090,472,735 | I_kwDODunzps5A_0sf | 3,503 | Batched in filter throws error | {
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,640,779,264,000 | 1,641,291,867,000 | 1,641,291,867,000 | NONE | null | null | null | I hope this is really a bug; I could not find it among the open issues.
## Describe the bug
Using `batched=False` in `Dataset.filter` throws an error:
```python
TypeError: filter() got an unexpected keyword argument 'batched'
```
but in the docs it is listed as an argument.
## Steps to reproduce the bug
```python
task = "mnli"
max_length = 128
tokenizer = AutoTokenizer.from_pretrained("./pretrained_models/pretrained_models_drozd/sl250.m.gsic.titech.ac.jp:8000/21.11.17_06.30.32_roberta-base_a0057/checkpoints/smpl_400M/hf/")
dataset = load_dataset("glue", task)
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mnli-mm": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
##### tokenization_parameters
sentence1_key, sentence2_key = task_to_keys[task]
def preprocess_function(examples, max_length):
if sentence2_key is None:
return tokenizer(
examples[sentence1_key], truncation=True, max_length=max_length
)
return tokenizer(
examples[sentence1_key],
examples[sentence2_key],
truncation=False,
padding="max_length",
max_length=max_length,
)
encoded_dataset = dataset.map(
lambda x: preprocess_function(x, max_length=max_length), batched=False
)
encoded_dataset.filter(lambda x: len(x['input_ids']) <= max_length, batched=False)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1, 1.17.0
- Platform: ubuntu
- Python version: 3.8.12
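
For reference, a minimal workaround sketch continuing the repro above — this assumes the `batched` keyword was simply dropped from `filter` in these releases while the docs still listed it, so the predicate must take single examples:
```python
# Hypothetical workaround (not an official fix): omit the `batched` kwarg;
# `filter` then invokes the predicate on one example at a time.
filtered = encoded_dataset.filter(lambda x: len(x["input_ids"]) <= max_length)
```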
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3503/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3502/comments | https://api.github.com/repos/huggingface/datasets/issues/3502/events | https://github.com/huggingface/datasets/pull/3502 | 1,090,438,558 | PR_kwDODunzps4wXSLi | 3,502 | Add QuALITY | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,640,775,526,000 | 1,641,812,032,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3502",
"html_url": "https://github.com/huggingface/datasets/pull/3502",
"diff_url": "https://github.com/huggingface/datasets/pull/3502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3502.patch",
"merged_at": null
} | Fixes #3441. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3502/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3501/comments | https://api.github.com/repos/huggingface/datasets/issues/3501/events | https://github.com/huggingface/datasets/pull/3501 | 1,090,413,758 | PR_kwDODunzps4wXM8H | 3,501 | Update pib dataset card | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,640,772,880,000 | 1,640,776,401,000 | 1,640,776,401,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3501",
"html_url": "https://github.com/huggingface/datasets/pull/3501",
"diff_url": "https://github.com/huggingface/datasets/pull/3501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3501.patch",
"merged_at": 1640776401000
} | Related to #3496 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3501/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3500/comments | https://api.github.com/repos/huggingface/datasets/issues/3500/events | https://github.com/huggingface/datasets/pull/3500 | 1,090,406,133 | PR_kwDODunzps4wXLTB | 3,500 | Docs: Add VCTK dataset description | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,640,772,125,000 | 1,641,293,162,000 | 1,641,291,909,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3500",
"html_url": "https://github.com/huggingface/datasets/pull/3500",
"diff_url": "https://github.com/huggingface/datasets/pull/3500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3500.patch",
"merged_at": 1641291909000
} | This PR is a very minor followup to #1837, with only docs changes (single comment string). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3500/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3499/comments | https://api.github.com/repos/huggingface/datasets/issues/3499/events | https://github.com/huggingface/datasets/issues/3499 | 1,090,132,618 | I_kwDODunzps5A-hqK | 3,499 | Adjusting chunk size for streaming datasets | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to increase `fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE `\r\n\r\nCurrently this is unfortunately done in a single thread, so it blocks the processing to download and uncompress the next block. At one point it would be nice to be able to do that in parallel !",
"Hi! Thanks for the help, I will try it :)"
] | 1,640,726,273,000 | 1,641,569,961,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?), I hit a performance bottleneck because of the frequent decompression.
**Describe the solution you'd like**
I would appreciate a parameter in the `load_dataset` function that allows me to set the chunk size myself (to a value like 100'000 in my case). That way, I hope to improve the processing time.
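
Until such a parameter exists, here is a hedged sketch of the interim workaround suggested in the comments — raising fsspec's read-buffer block size before streaming. The attribute path is taken from the maintainer's comment, and the 100 MiB value is only an example:
```python
import fsspec
from datasets import load_dataset

# Enlarge fsspec's per-file read buffer (reportedly 5 MiB by default) so each
# network round-trip fetches a bigger compressed chunk before decompression.
fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE = 100 * 2**20  # 100 MiB

ds = load_dataset("mc4", "en", split="train", streaming=True)
for doc in ds:
    ...  # document filtering goes here
    break
```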
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3499/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3498/comments | https://api.github.com/repos/huggingface/datasets/issues/3498/events | https://github.com/huggingface/datasets/pull/3498 | 1,090,096,332 | PR_kwDODunzps4wWL5U | 3,498 | update `pretty_name` for first 200 datasets | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,640,721,007,000 | 1,641,408,503,000 | 1,641,400,701,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3498",
"html_url": "https://github.com/huggingface/datasets/pull/3498",
"diff_url": "https://github.com/huggingface/datasets/pull/3498.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3498.patch",
"merged_at": 1641400701000
} | I made a script some time back to fetch `pretty_names` from `papers_with_code` dataset along with some other rules incase that dataset wasn't available on `papers_with_code`. Updating them in the `README` of `datasets`. Took only the first 200 datasets into consideration and after some eyeballing, most of them were looking good to me! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3498/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3497/comments | https://api.github.com/repos/huggingface/datasets/issues/3497/events | https://github.com/huggingface/datasets/issues/3497 | 1,090,050,148 | I_kwDODunzps5A-Nhk | 3,497 | Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py",
"I'm seeing this too, when using preprocessing_num_workers with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py"
] | 1,640,714,629,000 | 1,641,292,136,000 | null | MEMBER | null | null | null | Running:
```python
from datasets import load_dataset, DatasetDict
import datasets
from transformers import AutoFeatureExtractor
raw_datasets = DatasetDict()
raw_datasets["train"] = load_dataset("common_voice", "ab", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
raw_datasets = raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
)
num_workers = 16
def prepare_dataset(batch):
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
return batch
raw_datasets.map(
prepare_dataset,
remove_columns=next(iter(raw_datasets.values())).column_names,
num_proc=16,
desc="preprocess datasets",
)
```
gives
```bash
File "/home/patrick/experiments/run_bug.py", line 25, in <module>
raw_datasets.map(
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 492, in map
{
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 493, in <dictcomp>
k: dataset.map(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2139, in map
shards = [
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2140, in <listcomp>
self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 3164, in shard
return self.select(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/fingerprint.py", line 411, in wrapper
out = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2756, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2667, in _new_dataset_with_indices
return Dataset(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 659, in __init__
raise ValueError(
ValueError: External features info don't match the dataset:
Got
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
but expected something like
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
```
Versions:
```python
- `datasets` version: 1.16.2.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 6.0.1
```
and `transformers`:
```
- `transformers` version: 4.16.0.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3497/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3496/comments | https://api.github.com/repos/huggingface/datasets/issues/3496/events | https://github.com/huggingface/datasets/pull/3496 | 1,089,989,155 | PR_kwDODunzps4wV1_w | 3,496 | Update version of pib dataset and make it streamable | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It seems like there is still an error: `Message: 'TarContainedFile' object has no attribute 'readable'`\r\n\r\nhttps://huggingface.co/datasets/pib/viewer",
"@severo I was wondering about that...\r\n\r\nIt works fine when I run it in streaming mode in my terminal:\r\n```python\r\nIn [3]: from datasets import load_dataset; ds = load_dataset(\"pib\", \"gu-pa\", split=\"train\", streaming=True); item = next(iter(ds))\r\n\r\nIn [4]: item\r\nOut[4]: \r\n{'translation': {'gu': 'એવો નિર્ણય લેવાયો હતો કે ખંતપૂર્વકની કામગીરી હાથ ધરવા, કાયદેસર અને ટેકનિકલ મૂલ્યાંકન કરવા, વેન્ચર કેપિટલ ઇન્વેસ્ટમેન્ટ સમિતિની બેઠક યોજવા વગેરે એઆઇએફને કરવામાં આવેલ પ્રતિબદ્ધતાના 0.50 ટકા સુધી અને બાકીની રકમ એફએફએસને પૂર્ણ કરવામાં આવશે.',\r\n 'pa': 'ਇਹ ਵੀ ਫੈਸਲਾ ਕੀਤਾ ਗਿਆ ਕਿ ਐੱਫਆਈਆਈ ਅਤੇ ਬਕਾਏ ਲਈ ਕੀਤੀਆਂ ਗਈਆਂ ਵਚਨਬੱਧਤਾਵਾਂ ਦੇ 0.50 % ਦੀ ਸੀਮਾ ਤੱਕ ਐੱਫਈਐੱਸ ਨੂੰ ਮਿਲਿਆ ਜਾਏਗਾ, ਇਸ ਨਾਲ ਉੱਦਮ ਪੂੰਜੀ ਨਿਵੇਸ਼ ਕਮੇਟੀ ਦੀ ਬੈਠਕ ਦਾ ਆਯੋਜਨ ਉਚਿਤ ਸਾਵਧਾਨੀ, ਕਾਨੂੰਨੀ ਅਤੇ ਤਕਨੀਕੀ ਮੁੱਲਾਂਕਣ ਲਈ ਸੰਚਾਲਨ ਖਰਚ ਆਦਿ ਦੀ ਪੂਰਤੀ ਹੋਵੇਗੀ।'}}\r\n```",
"OK, it works now!\r\n\r\n<img width=\"794\" alt=\"Capture d’écran 2022-01-03 à 15 41 44\" src=\"https://user-images.githubusercontent.com/1676121/147943676-6199d1a9-f288-4350-af96-a7c297ebb743.png\">\r\n"
] | 1,640,707,315,000 | 1,641,220,948,000 | 1,640,767,377,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3496",
"html_url": "https://github.com/huggingface/datasets/pull/3496",
"diff_url": "https://github.com/huggingface/datasets/pull/3496.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3496.patch",
"merged_at": 1640767377000
} | This PR:
- Updates version of pib dataset: from 0.0.0 to 1.3.0
- Makes the dataset streamable
Fix #3491.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3496/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3495/comments | https://api.github.com/repos/huggingface/datasets/issues/3495/events | https://github.com/huggingface/datasets/issues/3495 | 1,089,983,632 | I_kwDODunzps5A99SQ | 3,495 | Add VoxLingua107 | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,640,706,703,000 | 1,640,706,703,000 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** VoxLingua107
- **Description:** VoxLingua107 is a speech dataset for training spoken language identification models.
- **Paper:** https://arxiv.org/abs/2011.12998
- **Data:** http://bark.phon.ioc.ee/voxlingua107/
- **Motivation:** 107 languages, totaling 6628 hours for the train split.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3495/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3494/comments | https://api.github.com/repos/huggingface/datasets/issues/3494/events | https://github.com/huggingface/datasets/pull/3494 | 1,089,983,103 | PR_kwDODunzps4wV0vB | 3,494 | Clone full repo to detect new tags when mirroring datasets on the Hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Good catch !!",
"The CI fail is unrelated to this PR and fixed on master, merging :)"
] | 1,640,706,647,000 | 1,640,707,641,000 | 1,640,707,640,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3494",
"html_url": "https://github.com/huggingface/datasets/pull/3494",
"diff_url": "https://github.com/huggingface/datasets/pull/3494.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3494.patch",
"merged_at": 1640707640000
} | The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags.
By cloning the full repository, we can properly detect a new release and tag all the dataset repositories accordingly.
cc @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3494/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3494/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3493/comments | https://api.github.com/repos/huggingface/datasets/issues/3493/events | https://github.com/huggingface/datasets/pull/3493 | 1,089,967,286 | PR_kwDODunzps4wVxfr | 3,493 | Fix VCTK encoding | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,640,705,016,000 | 1,640,706,498,000 | 1,640,706,497,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3493",
"html_url": "https://github.com/huggingface/datasets/pull/3493",
"diff_url": "https://github.com/huggingface/datasets/pull/3493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3493.patch",
"merged_at": 1640706497000
} | utf-8 encoding was missing in the VCTK dataset builder added in #3351 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3493/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3492/comments | https://api.github.com/repos/huggingface/datasets/issues/3492/events | https://github.com/huggingface/datasets/pull/3492 | 1,089,952,943 | PR_kwDODunzps4wVufr | 3,492 | Add `gzip` for `to_json` | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,640,703,671,000 | 1,641,387,816,000 | 1,641,387,816,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3492",
"html_url": "https://github.com/huggingface/datasets/pull/3492",
"diff_url": "https://github.com/huggingface/datasets/pull/3492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3492.patch",
"merged_at": 1641387815000
} | (Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3492/timeline | null | true |
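
As a usage note for the PR above, exporting to gzipped JSON lines might look like this sketch; the exact `compression` keyword is an assumption based on the pandas-style interface the PR extends:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})
# Assumed interface: `to_json` forwards a pandas-style `compression` kwarg,
# gzipping the JSON-lines output in one step.
ds.to_json("out.jsonl.gz", compression="gzip")
```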
https://api.github.com/repos/huggingface/datasets/issues/3491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3491/comments | https://api.github.com/repos/huggingface/datasets/issues/3491/events | https://github.com/huggingface/datasets/issues/3491 | 1,089,918,018 | I_kwDODunzps5A9tRC | 3,491 | Update version of pib dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,640,700,238,000 | 1,640,767,377,000 | 1,640,767,377,000 | MEMBER | null | null | null | On the Hub we have v0, while there exists v1.3.
Related to bigscience-workshop/data_tooling#130
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3491/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3490/comments | https://api.github.com/repos/huggingface/datasets/issues/3490/events | https://github.com/huggingface/datasets/issues/3490 | 1,089,730,181 | I_kwDODunzps5A8_aF | 3,490 | Does datasets support load text from HDFS? | {
"login": "dancingpipi",
"id": 20511825,
"node_id": "MDQ6VXNlcjIwNTExODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/20511825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dancingpipi",
"html_url": "https://github.com/dancingpipi",
"followers_url": "https://api.github.com/users/dancingpipi/followers",
"following_url": "https://api.github.com/users/dancingpipi/following{/other_user}",
"gists_url": "https://api.github.com/users/dancingpipi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dancingpipi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dancingpipi/subscriptions",
"organizations_url": "https://api.github.com/users/dancingpipi/orgs",
"repos_url": "https://api.github.com/users/dancingpipi/repos",
"events_url": "https://api.github.com/users/dancingpipi/events{/privacy}",
"received_events_url": "https://api.github.com/users/dancingpipi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)"
] | 1,640,681,762,000 | 1,641,395,411,000 | null | NONE | null | null | null | The raw text data is stored on HDFS because the dataset is too large to keep on my development machine,
so I wonder: does `datasets` support reading data from HDFS? (See the workaround sketch after this record.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3490/timeline | null | false |
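
A possible workaround for the HDFS question above — a hedged sketch assuming pyarrow's `HadoopFileSystem` with a working libhdfs setup; the host, port, and paths are placeholders. The idea is to copy the file to local disk first, then load it with the standard `text` builder:
```python
from pyarrow import fs
from datasets import load_dataset

# Placeholder cluster coordinates; requires libhdfs to be configured.
hdfs = fs.HadoopFileSystem(host="namenode", port=8020)

# Stream the remote text file to local disk in 16 MiB chunks,
# then load it with the `text` builder.
with hdfs.open_input_stream("/data/corpus.txt") as src, open("corpus.txt", "wb") as dst:
    while chunk := src.read(16 << 20):
        dst.write(chunk)

ds = load_dataset("text", data_files="corpus.txt")
```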