| Column | Type | Range / classes |
|---|---|---|
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 971M–1.13B |
| node_id | stringlengths | 18–32 |
| number | int64 | 2.8k–3.71k |
| title | stringlengths | 2–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | int64 | 0–18 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 9–36.2k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/datasets/issues/2999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2999/comments
https://api.github.com/repos/huggingface/datasets/issues/2999/events
https://github.com/huggingface/datasets/pull/2999
1,013,536,933
PR_kwDODunzps4skgCm
2,999
Set trivia_qa writer batch size
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-10-01T16:23:26"
"2021-10-01T16:34:55"
"2021-10-01T16:34:55"
MEMBER
null
Save some RAM when generating trivia_qa
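The PR body above is terse; for context, a `GeneratorBasedBuilder` in `datasets` can cap generation RAM by lowering its writer batch size, which controls how many examples are buffered before being flushed to the Arrow file. A minimal sketch of that mechanism (`DEFAULT_WRITER_BATCH_SIZE` is a real builder attribute; the value 500 is illustrative, not necessarily what this PR used):

```python
# Minimal sketch: a dataset script caps RAM during generation by flushing
# examples to disk more often. DEFAULT_WRITER_BATCH_SIZE is a real
# GeneratorBasedBuilder attribute; the value 500 is illustrative only.
import datasets


class TriviaQa(datasets.GeneratorBasedBuilder):
    # Buffer at most 500 examples in memory before writing a batch to disk.
    DEFAULT_WRITER_BATCH_SIZE = 500
```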
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2999/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2999", "html_url": "https://github.com/huggingface/datasets/pull/2999", "diff_url": "https://github.com/huggingface/datasets/pull/2999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2999.patch", "merged_at": "2021-10-01T16:34:55" }
https://api.github.com/repos/huggingface/datasets/issues/2998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2998/comments
https://api.github.com/repos/huggingface/datasets/issues/2998/events
https://github.com/huggingface/datasets/issues/2998
1,013,372,871
I_kwDODunzps48ZtfH
2,998
cannot shuffle dataset loaded from disk
{ "login": "pya25", "id": 54274249, "node_id": "MDQ6VXNlcjU0Mjc0MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/54274249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pya25", "html_url": "https://github.com/pya25", "followers_url": "https://api.github.com/users/pya25/followers", "following_url": "https://api.github.com/users/pya25/following{/other_user}", "gists_url": "https://api.github.com/users/pya25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pya25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pya25/subscriptions", "organizations_url": "https://api.github.com/users/pya25/orgs", "repos_url": "https://api.github.com/users/pya25/repos", "events_url": "https://api.github.com/users/pya25/events{/privacy}", "received_events_url": "https://api.github.com/users/pya25/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
"2021-10-01T13:49:52"
"2021-10-01T13:49:52"
null
NONE
null
## Describe the bug

A dataset loaded from disk cannot be shuffled.

## Steps to reproduce the bug

```
my_dataset = load_from_disk('s3://my_file/validate', fs=s3)
sample = my_dataset.select(range(100)).shuffle(seed=1234)
```

## Actual results

```
sample = my_dataset.select(range(100)).shuffle(seed=1234)
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2494, in shuffle
    new_fingerprint=new_fingerprint,
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2303, in select
    tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 547, in NamedTemporaryFile
    (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
  File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 258, in _mkstemp_inner
    fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpnnu5uhnx/my_file/validate/tmpy76d70g4'
```

## Environment info

- `datasets` version: 1.12.1
- Python version: 3.7
- PyArrow version: 5.0.0
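The traceback shows `select` trying to create its temporary indices file under the remote dataset's own directory, which does not exist locally. A possible workaround sketch while the bug stands, using the public `keep_in_memory` flags of `Dataset.select`/`Dataset.shuffle` (the S3 setup mirrors the report; the `s3fs` usage is an assumption about how `s3` was built):

```python
# Workaround sketch: keep the selection/shuffle indices in memory so datasets
# never tries to write a temp file under the (remote) dataset directory.
# keep_in_memory is part of the public Dataset.select/Dataset.shuffle API.
import s3fs
from datasets import load_from_disk

s3 = s3fs.S3FileSystem()  # credentials resolved from the environment
my_dataset = load_from_disk("s3://my_file/validate", fs=s3)
sample = my_dataset.select(range(100), keep_in_memory=True).shuffle(
    seed=1234, keep_in_memory=True
)
```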
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2998/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2997/comments
https://api.github.com/repos/huggingface/datasets/issues/2997/events
https://github.com/huggingface/datasets/issues/2997
1,013,270,069
I_kwDODunzps48ZUY1
2,997
Dataset has incorrect labels
{ "login": "marshmellow77", "id": 63367770, "node_id": "MDQ6VXNlcjYzMzY3Nzcw", "avatar_url": "https://avatars.githubusercontent.com/u/63367770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marshmellow77", "html_url": "https://github.com/marshmellow77", "followers_url": "https://api.github.com/users/marshmellow77/followers", "following_url": "https://api.github.com/users/marshmellow77/following{/other_user}", "gists_url": "https://api.github.com/users/marshmellow77/gists{/gist_id}", "starred_url": "https://api.github.com/users/marshmellow77/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marshmellow77/subscriptions", "organizations_url": "https://api.github.com/users/marshmellow77/orgs", "repos_url": "https://api.github.com/users/marshmellow77/repos", "events_url": "https://api.github.com/users/marshmellow77/events{/privacy}", "received_events_url": "https://api.github.com/users/marshmellow77/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2021-10-01T12:09:06"
"2021-10-01T15:32:00"
"2021-10-01T13:54:34"
NONE
null
The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached: ![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3257b4.PNG)
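A quick way to check the reported label skew is to count the label column directly; a sketch, assuming the column is named `sentiment` (taken from the dataset card, not from this issue):

```python
# Count the labels to confirm the report. The column name "sentiment" is an
# assumption based on the dataset card.
from collections import Counter

from datasets import load_dataset

reviews = load_dataset("turkish_product_reviews", split="train")
print(Counter(reviews["sentiment"]))  # a correct dataset shows both 0 and 1
```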
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2997/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2996/comments
https://api.github.com/repos/huggingface/datasets/issues/2996/events
https://github.com/huggingface/datasets/pull/2996
1,013,266,373
PR_kwDODunzps4sjrP6
2,996
Remove all query parameters when extracting protocol
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2021-10-01T12:05:34"
"2021-10-04T08:48:13"
"2021-10-04T08:48:13"
MEMBER
null
Fix `_get_extraction_protocol` to remove all query parameters, like `?raw=true`, `?dl=1`, etc.
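A sketch of the idea behind the fix (not the merged implementation): infer the compression protocol from the URL path alone, so any query string is ignored:

```python
# Sketch of the fix's idea, not the PR's actual code: infer the compression
# protocol from the URL path only, ignoring query parameters such as
# ?raw=true or ?dl=1.
from urllib.parse import urlparse


def get_extraction_protocol(url: str):
    path = urlparse(url).path  # drops "?raw=true", "?dl=1", ...
    for ext, protocol in [(".tar.gz", "tar"), (".gz", "gzip"), (".zip", "zip"), (".bz2", "bz2")]:
        if path.endswith(ext):
            return protocol
    return None


assert get_extraction_protocol("https://example.com/data.csv.gz?dl=1") == "gzip"
```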
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2996/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2996", "html_url": "https://github.com/huggingface/datasets/pull/2996", "diff_url": "https://github.com/huggingface/datasets/pull/2996.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2996.patch", "merged_at": "2021-10-04T08:48:13" }
https://api.github.com/repos/huggingface/datasets/issues/2995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2995/comments
https://api.github.com/repos/huggingface/datasets/issues/2995/events
https://github.com/huggingface/datasets/pull/2995
1,013,143,868
PR_kwDODunzps4sjThd
2,995
Fix trivia_qa unfiltered
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-10-01T09:53:43"
"2021-10-01T10:04:11"
"2021-10-01T10:04:10"
MEMBER
null
Fix https://github.com/huggingface/datasets/issues/2993
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2995/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2995", "html_url": "https://github.com/huggingface/datasets/pull/2995", "diff_url": "https://github.com/huggingface/datasets/pull/2995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2995.patch", "merged_at": "2021-10-01T10:04:10" }
https://api.github.com/repos/huggingface/datasets/issues/2994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2994/comments
https://api.github.com/repos/huggingface/datasets/issues/2994/events
https://github.com/huggingface/datasets/pull/2994
1,013,000,475
PR_kwDODunzps4si4I2
2,994
Fix loading compressed CSV without streaming
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-10-01T07:28:59"
"2021-10-01T15:53:16"
"2021-10-01T15:53:16"
MEMBER
null
When implementing support to stream CSV files (https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782), a regression was introduced that prevented loading compressed CSV files in non-streaming mode. This PR fixes it, allowing compressed and uncompressed CSV files to be loaded in both streaming and non-streaming modes. Fix #2977.
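With the regression fixed, the same compressed file should load either way; a usage sketch (the file path is illustrative):

```python
# Usage sketch after the fix; "data.csv.gz" is an illustrative local file.
from datasets import load_dataset

ds = load_dataset("csv", data_files="data.csv.gz")                      # non-streaming
ds_streamed = load_dataset("csv", data_files="data.csv.gz", streaming=True)
```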
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2994/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2994", "html_url": "https://github.com/huggingface/datasets/pull/2994", "diff_url": "https://github.com/huggingface/datasets/pull/2994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2994.patch", "merged_at": "2021-10-01T15:53:15" }
https://api.github.com/repos/huggingface/datasets/issues/2993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2993/comments
https://api.github.com/repos/huggingface/datasets/issues/2993/events
https://github.com/huggingface/datasets/issues/2993
1,012,702,665
I_kwDODunzps48XJ3J
2,993
Can't download `trivia_qa/unfiltered`
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
3
"2021-09-30T23:00:18"
"2021-10-01T19:07:23"
"2021-10-01T19:07:22"
MEMBER
null
## Describe the bug

For some reason, I can't download `trivia_qa/unfiltered`. A file seems to be missing... I am able to see it fine through the viewer, though...

## Steps to reproduce the bug

```python
>>> from datasets import load_dataset
>>> load_dataset("trivia_qa", "unfiltered")
Downloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6...
Traceback (most recent call last):
  File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 251, in _add_context
    with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/gpfsscratch/rech/six/commun/datasets/downloads/extracted/9fcb7eddc6afd46fd074af3c5128931dfe4b548f933c925a23847faf4c1995ad/evidence/wikipedia/Peanuts.txt'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py", line 852, in load_dataset
    use_auth_token=use_auth_token,
  File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 616, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split
    disable=bool(logging.get_verbosity() == logging.NOTSET),
  File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
    for obj in iterable:
  File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 303, in _generate_examples
    example = parse_example(article)
  File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 274, in parse_example
    _add_context(article.get("EntityPages", []), "WikiContext", wiki_dir),
  File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 253, in _add_context
    except (IOError, datasets.Value("errors").NotFoundError):
  File "<string>", line 5, in __init__
  File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 265, in __post_init__
    self.pa_type = string_to_arrow(self.dtype)
  File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 134, in string_to_arrow
    f"Neither {datasets_dtype} nor {datasets_dtype + '_'} seems to be a pyarrow data type. "
ValueError: Neither errors nor errors_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```

## Expected results

I am able to load another subset (`rc`), but unable to load this one. I am not sure why the try/except doesn't catch it...

https://github.com/huggingface/datasets/blob/9675a5a1e7b99a86f9c250f6ea5fa5d1e6d5cc7d/datasets/trivia_qa/trivia_qa.py#L253

## Environment info

- `datasets` version: 1.11.0
- Platform: Linux-4.18.0-147.51.2.el8_1.x86_64-x86_64-with-redhat-8.1-Ootpa
- Python version: 3.7.10
- PyArrow version: 3.0.0
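The second traceback pinpoints the real bug: the script's `except` clause names `datasets.Value("errors").NotFoundError`, which is not an exception type, so evaluating the clause itself raises and masks the original `FileNotFoundError`. A simplified sketch of a valid clause (the real `_add_context` takes different arguments; this only illustrates the exception handling):

```python
# Simplified sketch of the faulty pattern and a valid replacement; the real
# _add_context in trivia_qa.py has a different signature.
import os


def _add_context(file_dir: str, fname: str) -> str:
    try:
        with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
            return f.read()
    except IOError:  # also catches FileNotFoundError; the original clause
        return ""    # referenced a non-existent datasets exception type
```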
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2993/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2992/comments
https://api.github.com/repos/huggingface/datasets/issues/2992/events
https://github.com/huggingface/datasets/pull/2992
1,012,325,594
PR_kwDODunzps4sg4ZP
2,992
Fix f1 metric with None average
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-30T15:31:57"
"2021-10-01T14:17:39"
"2021-10-01T14:17:38"
MEMBER
null
Fix #2979.
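For context, the expected behaviour after this fix is that `average=None` returns one F1 score per class, mirroring scikit-learn's `f1_score`; a usage sketch:

```python
# Expected behaviour once fixed: average=None yields per-class F1 scores.
from datasets import load_metric

f1 = load_metric("f1")
result = f1.compute(predictions=[0, 1, 1, 2], references=[0, 1, 2, 2], average=None)
print(result)  # per-class scores, e.g. {'f1': array([1., 0.667, 0.667])}
```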
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2992/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2992", "html_url": "https://github.com/huggingface/datasets/pull/2992", "diff_url": "https://github.com/huggingface/datasets/pull/2992.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2992.patch", "merged_at": "2021-10-01T14:17:38" }
https://api.github.com/repos/huggingface/datasets/issues/2991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2991/comments
https://api.github.com/repos/huggingface/datasets/issues/2991/events
https://github.com/huggingface/datasets/issues/2991
1,012,174,823
I_kwDODunzps48VI_n
2,991
add documentation for the `Unix style pattern` matching feature that can be leveraged for `data_files` in `load_dataset`
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2021-09-30T13:22:01"
"2021-09-30T13:22:01"
null
NONE
null
Unless I'm mistaken, it seems that the new documentation no longer mentions that you can use Unix-style pattern matching in the `data_files` argument of the `load_dataset` method. This feature was mentioned [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-a-community-dataset-on-the-hugging-face-hub) in the previous documentation. I'd love to hear your opinion, @lhoestq, @albertvillanova and @stevhliu
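The feature in question, for reference; the file paths are illustrative:

```python
# The feature the issue wants re-documented: data_files accepts Unix-style
# glob patterns. Paths here are illustrative.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={"train": "data/train-*.json", "test": "data/test-*.json"},
)
```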
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2991/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2991/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2990/comments
https://api.github.com/repos/huggingface/datasets/issues/2990/events
https://github.com/huggingface/datasets/pull/2990
1,012,097,418
PR_kwDODunzps4sgLt5
2,990
Make Dataset.map accept list of np.array
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-30T12:08:54"
"2021-10-01T13:57:46"
"2021-10-01T13:57:46"
MEMBER
null
Fix #2987.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2990/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2990", "html_url": "https://github.com/huggingface/datasets/pull/2990", "diff_url": "https://github.com/huggingface/datasets/pull/2990.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2990.patch", "merged_at": "2021-10-01T13:57:45" }
https://api.github.com/repos/huggingface/datasets/issues/2989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2989/comments
https://api.github.com/repos/huggingface/datasets/issues/2989/events
https://github.com/huggingface/datasets/pull/2989
1,011,220,375
PR_kwDODunzps4sdlt1
2,989
Add CommonLanguage
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-29T17:21:30"
"2021-10-01T17:36:39"
"2021-10-01T17:00:03"
MEMBER
null
This PR adds the Common Language dataset (https://zenodo.org/record/5036977).

The dataset is intended for language-identification speech classifiers and is already used by models on the Hub:
* https://huggingface.co/speechbrain/lang-id-commonlanguage_ecapa
* https://huggingface.co/anton-l/wav2vec2-base-langid

cc @patrickvonplaten
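A loading sketch for the new dataset; the Hub id `common_language` and the `language` column name are assumptions inferred from the PR title and linked models, not verified against the final script:

```python
# Loading sketch; "common_language" and the "language" column are assumptions.
from datasets import load_dataset

common_lang = load_dataset("common_language", split="train")
print(common_lang[0]["language"])  # language-identification label
```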
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2989/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2989/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2989", "html_url": "https://github.com/huggingface/datasets/pull/2989", "diff_url": "https://github.com/huggingface/datasets/pull/2989.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2989.patch", "merged_at": "2021-10-01T17:00:03" }
https://api.github.com/repos/huggingface/datasets/issues/2988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2988/comments
https://api.github.com/repos/huggingface/datasets/issues/2988/events
https://github.com/huggingface/datasets/issues/2988
1,011,148,017
I_kwDODunzps48ROTx
2,988
IndexError: Invalid key: 14 is out of bounds for size 0
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
6
"2021-09-29T16:04:24"
"2022-01-05T06:38:06"
null
NONE
null
## Describe the bug

Hi. I am trying to implement the stochastic weight averaging (SWA) optimizer with the transformers library, as described here: https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/. For this I am using the run_clm.py script, which works fine before adding the SWA optimizer. The moment I modify the model with `swa_model = AveragedModel(model)` in this script, I get the error below. Since I am NOT touching the dataloader part, I am confused why this occurs. I would very much appreciate your opinion on this @lhoestq

## Steps to reproduce the bug

```
Traceback (most recent call last):
  File "run_clm.py", line 723, in <module>
    main()
  File "run_clm.py", line 669, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/transformers/trainer.py", line 1258, in train
    for step, inputs in enumerate(epoch_iterator):
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1530, in __getitem__
    format_kwargs=self._format_kwargs,
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1517, in _getitem
    pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 368, in query_table
    _check_valid_index_key(key, size)
  File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 311, in _check_valid_index_key
    raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 14 is out of bounds for size 0
```

## Expected results

Not getting the index error.

## Actual results

Please see the above.

## Environment info

- `datasets` version: 1.12.1
- Platform: linux
- Python version: 3.7.11
- PyArrow version: 5.0.0
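One plausible reading of the error, given the `AveragedModel(model)` wrapper: `Trainer` prunes dataset columns based on the model's `forward()` signature, and a wrapped model with a `*args, **kwargs` signature can cause every column to be dropped, leaving a size-0 dataset. A hedged mitigation sketch using the real `remove_unused_columns` flag:

```python
# Mitigation sketch: disable Trainer's signature-based column pruning, which
# a wrapped model (AveragedModel) can confuse into dropping every column.
# remove_unused_columns is a real TrainingArguments flag.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    remove_unused_columns=False,
)
```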
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2988/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2987/comments
https://api.github.com/repos/huggingface/datasets/issues/2987/events
https://github.com/huggingface/datasets/issues/2987
1,011,026,141
I_kwDODunzps48Qwjd
2,987
ArrowInvalid: Can only convert 1-dimensional array values
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-29T14:18:52"
"2021-10-01T13:57:45"
"2021-10-01T13:57:45"
NONE
null
## Describe the bug

For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset:

```
def preprocess_data(examples):
    images = [Image.open(path).convert("RGB") for path in examples['image_path']]
    words = examples['words']
    boxes = examples['bboxes']
    word_labels = examples['ner_tags']
    encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
                               padding="max_length", truncation=True)
    return encoded_inputs
```

```
Full trace:
---------------------------------------------------------------------------
ArrowInvalid                              Traceback (most recent call last)
<ipython-input-8-0fc3efc6f0c2> in <module>()
     27
     28 train_dataset = datasets['train'].map(preprocess_data, batched=True, remove_columns=datasets['train'].column_names,
---> 29                                       features=features)
     30 test_dataset = datasets['test'].map(preprocess_data, batched=True, remove_columns=datasets['test'].column_names,
     31                                     features=features)

13 frames

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
   1701                 new_fingerprint=new_fingerprint,
   1702                 disable_tqdm=disable_tqdm,
-> 1703                 desc=desc,
   1704             )
   1705         else:

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    183         }
    184         # apply actual function
--> 185         out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    186         datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    187         # re-apply format to the output

/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
    396                 # Call actual function
    397
--> 398                 out = func(self, *args, **kwargs)
    399
    400                 # Update fingerprint of in-place transforms + update in-place history of transforms

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
   2063                             writer.write_table(batch)
   2064                         else:
-> 2065                             writer.write_batch(batch)
   2066                 if update_data and writer is not None:
   2067                     writer.finalize()  # close_stream=bool(buf_writer is None))  # We only close if we are writing in a file

/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
    409             typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
    410             typed_sequence_examples[col] = typed_sequence
--> 411         pa_table = pa.Table.from_pydict(typed_sequence_examples)
    412         self.write_table(pa_table, writer_batch_size)
    413

/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()

/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()

/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()

/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()

/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
    106                 storage = numpy_to_pyarrow_listarray(self.data, type=type.value_type)
    107             else:
--> 108                 storage = pa.array(self.data, type.storage_dtype)
    109             out = pa.ExtensionArray.from_storage(type, storage)
    110         elif isinstance(self.data, np.ndarray):

/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()

/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()

/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()

/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowInvalid: Can only convert 1-dimensional array values
```

It can be fixed by adding the following line:

```diff
def preprocess_data(examples):
    images = [Image.open(path).convert("RGB") for path in examples['image_path']]
    words = examples['words']
    boxes = examples['bboxes']
    word_labels = examples['ner_tags']
    encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
                               padding="max_length", truncation=True)
+   encoded_inputs["image"] = np.array(encoded_inputs["image"])
    return encoded_inputs
```

However, it would be great if this could be fixed within Datasets itself.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2987/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2986/comments
https://api.github.com/repos/huggingface/datasets/issues/2986/events
https://github.com/huggingface/datasets/pull/2986
1,010,792,783
PR_kwDODunzps4scSHR
2,986
Refactor module factory + avoid ETag requests for Hub datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
"2021-09-29T10:42:00"
"2021-10-11T11:05:53"
"2021-10-11T11:05:52"
MEMBER
null
## Refactor the module factory

When trying to extend the `data_files` logic to avoid doing unnecessary ETag requests, I noticed that the module preparation mechanism needed a refactor:
- the function was 600 lines long
- it was not readable
- it contained many different cases that made it complex to maintain
- it was hard to properly test it
- it was hard to extend without breaking anything

The module preparation mechanism is in charge of taking the name of a dataset or a metric given by the user (ex: "squad", "accuracy", "lhoestq/test", "path/to/my/script.py", "path/to/my/data/directory", "json", "csv") and returning a module (possibly downloaded from the Hub) that contains the dataset builder or the metric class to use.

### Implementation details

I decided to separate all these use cases into different dataset/metric module factories.

First, the metric module factories:
- **CanonicalMetricModuleFactory**: "accuracy", "rouge", ...
- **LocalMetricModuleFactory**: "path/to/my/metric.py"

Then, the dataset module factories:
- **CanonicalDatasetModuleFactory**: "squad", "glue", ...
- **CommunityDatasetModuleFactoryWithScript**: "lhoestq/test"
- **CommunityDatasetModuleFactoryWithoutScript**: "lhoestq/demo1"
- **PackagedDatasetModuleFactory**: "json", "csv", ...
- **LocalDatasetModuleFactoryWithScript**: "path/to/my/script.py"
- **LocalDatasetModuleFactoryWithoutScript**: "path/to/my/data/directory"

And finally, additional factories when users have no internet:
- **CachedDatasetModuleFactory**
- **CachedMetricModuleFactory**

### Breaking changes

One thing is that I still don't know to what extent we want to keep backward compatibility for `prepare_module`. For now I just kept it (except I removed two parameters), just in case, but it's not used anywhere anymore.

## Avoid ETag requests for Hub datasets

To do this I added a class `DataFilesDict` that can be hashed to define the cache directory of the dataset. It contains the usual data files formatted as `{"train": ["train.txt"]}` for example. But each list of files is a `DataFilesList` that also has an `origin_metadata` attribute containing metadata about the origin of each file:
- for URLs: it stores the ETags of the files
- for local files: it stores the last modification date
- for files from a Hugging Face repository on the Hub: it stores the pattern (`*`, `*.csv`, "train.txt", etc.) and the commit sha of the repository (so there are no ETag requests!)

This way, if any file changes, the hash of the `DataFilesDict` changes too!

You can instantiate a `DataFilesDict` by using patterns for local/remote files or files in a HF repository, as shown in the sketch after this description:
- for local/remote files: `DataFilesDict.from_local_or_remote(patterns)`
- for files in a HF repository: `DataFilesDict.from_hf_repo(patterns, dataset_info)`

Fix #2859

## TODO

Fix the latest test:
- [x] fix the call to dataset_info in offline mode (related to https://github.com/huggingface/huggingface_hub/issues/372)

Add some more tests:
- [x] test all the factories
- [x] test the new data files logic

Other:
- [x] docstrings
- [x] comments
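A usage sketch for the two `DataFilesDict` constructors named in the PR description (the import path and exact signatures are assumptions; the patterns are illustrative):

```python
# Usage sketch for the constructors the PR describes. The import path and
# exact signatures are assumptions; patterns are illustrative.
from datasets.data_files import DataFilesDict

# Local or remote files: hashed via last-modified dates / ETags.
data_files = DataFilesDict.from_local_or_remote({"train": ["train.txt"]})

# Files in a Hub repository: hashed via the repo's commit sha, so no
# per-file ETag requests are needed.
# data_files = DataFilesDict.from_hf_repo({"train": ["*.csv"]}, dataset_info)
```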
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2986/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2986/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2986", "html_url": "https://github.com/huggingface/datasets/pull/2986", "diff_url": "https://github.com/huggingface/datasets/pull/2986.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2986.patch", "merged_at": "2021-10-11T11:05:51" }
https://api.github.com/repos/huggingface/datasets/issues/2985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2985/comments
https://api.github.com/repos/huggingface/datasets/issues/2985/events
https://github.com/huggingface/datasets/pull/2985
1,010,500,433
PR_kwDODunzps4sbbbo
2,985
add new dataset kan_hope
{ "login": "adeepH", "id": 46108405, "node_id": "MDQ6VXNlcjQ2MTA4NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adeepH", "html_url": "https://github.com/adeepH", "followers_url": "https://api.github.com/users/adeepH/followers", "following_url": "https://api.github.com/users/adeepH/following{/other_user}", "gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}", "starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adeepH/subscriptions", "organizations_url": "https://api.github.com/users/adeepH/orgs", "repos_url": "https://api.github.com/users/adeepH/repos", "events_url": "https://api.github.com/users/adeepH/events{/privacy}", "received_events_url": "https://api.github.com/users/adeepH/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-29T05:20:28"
"2021-10-01T16:55:19"
"2021-10-01T16:55:19"
CONTRIBUTOR
null
## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Task:** *Binary Text Classification*
- **Paper:** *https://arxiv.org/abs/2108.04616*
- **Data:** *https://github.com/adeepH/kan_hope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India*
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2985/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2985", "html_url": "https://github.com/huggingface/datasets/pull/2985", "diff_url": "https://github.com/huggingface/datasets/pull/2985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2985.patch", "merged_at": "2021-10-01T16:55:19" }
https://api.github.com/repos/huggingface/datasets/issues/2984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2984/comments
https://api.github.com/repos/huggingface/datasets/issues/2984/events
https://github.com/huggingface/datasets/issues/2984
1,010,484,326
I_kwDODunzps48OsRm
2,984
Exceeded maximum rows when reading large files
{ "login": "zijwang", "id": 25057983, "node_id": "MDQ6VXNlcjI1MDU3OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/25057983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijwang", "html_url": "https://github.com/zijwang", "followers_url": "https://api.github.com/users/zijwang/followers", "following_url": "https://api.github.com/users/zijwang/following{/other_user}", "gists_url": "https://api.github.com/users/zijwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijwang/subscriptions", "organizations_url": "https://api.github.com/users/zijwang/orgs", "repos_url": "https://api.github.com/users/zijwang/repos", "events_url": "https://api.github.com/users/zijwang/events{/privacy}", "received_events_url": "https://api.github.com/users/zijwang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-29T04:49:22"
"2021-10-12T06:05:42"
"2021-10-12T06:05:42"
NONE
null
## Describe the bug When using `load_dataset` with JSON files that are too large, loading fails with an "Exceeded maximum rows" error. ## Steps to reproduce the bug ```python dataset = load_dataset('json', data_files=data_files) # data files have 3M rows in a single file ``` ## Expected results No error ## Actual results ``` ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 134 with open(file, encoding="utf-8") as f: --> 135 dataset = json.load(f) 136 except json.JSONDecodeError: ~/anaconda3/envs/python/lib/python3.9/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 292 """ --> 293 return loads(fp.read(), 294 cls=cls, object_hook=object_hook, ~/anaconda3/envs/python/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: ~/anaconda3/envs/python/lib/python3.9/json/decoder.py in decode(self, s, _w) 339 if end != len(s): --> 340 raise JSONDecodeError("Extra data", s, end) 341 return obj JSONDecodeError: Extra data: line 2 column 1 (char 20321) During handling of the above exception, another exception occurred: ArrowInvalid Traceback (most recent call last) <ipython-input-20-ab3718a6482f> in <module> ----> 1 dataset = load_dataset('json', data_files=data_files) ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 841 842 # Download and prepare data --> 843 builder_instance.download_and_prepare( 844 download_config=download_config, 845 download_mode=download_mode, ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 606 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 607 if not downloaded_from_gcs: --> 608 self._download_and_prepare( 609 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 610 ) ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 684 try: 685 # Prepare split will record examples associated to the split --> 686 self._prepare_split(split_generator, **prepare_split_kwargs) 687 except OSError as e: 688 raise OSError( ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1153 generator = self._generate_tables(**split_generator.gen_kwargs) 1154 with ArrowWriter(features=self.info.features, path=fpath) as writer: -> 1155 for key, table in utils.tqdm( 1156 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET) 1157 ): ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 135 dataset = json.load(f) 136 except json.JSONDecodeError: --> 137 raise e 138 raise ValueError( 139 f"Not able to read records in the JSON file at {file}. " ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 114 while True: 115 try: --> 116 pa_table = paj.read_json( 117 BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) 118 ) ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json() ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Exceeded maximum rows ``` ## Environment info - `datasets` version: - Platform: Linux - Python version: 3.9 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2984/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2983/comments
https://api.github.com/repos/huggingface/datasets/issues/2983/events
https://github.com/huggingface/datasets/pull/2983
1,010,263,058
PR_kwDODunzps4saw_v
2,983
added SwissJudgmentPrediction dataset
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-28T22:17:56"
"2021-10-01T16:03:05"
"2021-10-01T16:03:05"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2983/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2983", "html_url": "https://github.com/huggingface/datasets/pull/2983", "diff_url": "https://github.com/huggingface/datasets/pull/2983.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2983.patch", "merged_at": "2021-10-01T16:03:05" }
https://api.github.com/repos/huggingface/datasets/issues/2982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2982/comments
https://api.github.com/repos/huggingface/datasets/issues/2982/events
https://github.com/huggingface/datasets/pull/2982
1,010,118,418
PR_kwDODunzps4saVLh
2,982
Add the Math Aptitude Test of Heuristics dataset.
{ "login": "hacobe", "id": 91226467, "node_id": "MDQ6VXNlcjkxMjI2NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hacobe", "html_url": "https://github.com/hacobe", "followers_url": "https://api.github.com/users/hacobe/followers", "following_url": "https://api.github.com/users/hacobe/following{/other_user}", "gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}", "starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hacobe/subscriptions", "organizations_url": "https://api.github.com/users/hacobe/orgs", "repos_url": "https://api.github.com/users/hacobe/repos", "events_url": "https://api.github.com/users/hacobe/events{/privacy}", "received_events_url": "https://api.github.com/users/hacobe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-28T19:18:37"
"2021-10-01T19:51:23"
"2021-10-01T12:21:00"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2982/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2982", "html_url": "https://github.com/huggingface/datasets/pull/2982", "diff_url": "https://github.com/huggingface/datasets/pull/2982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2982.patch", "merged_at": "2021-10-01T12:21:00" }
https://api.github.com/repos/huggingface/datasets/issues/2981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2981/comments
https://api.github.com/repos/huggingface/datasets/issues/2981/events
https://github.com/huggingface/datasets/pull/2981
1,009,969,310
PR_kwDODunzps4sZ4ke
2,981
add wit dataset
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
4
"2021-09-28T16:34:49"
"2022-01-05T13:08:52"
null
CONTRIBUTOR
null
Resolves #2902 based on conversation there - would also close #2810. Open to suggestions/help 😀 CC @hassiahk @lhoestq @yjernite
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2981/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/datasets/issues/2981/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2981", "html_url": "https://github.com/huggingface/datasets/pull/2981", "diff_url": "https://github.com/huggingface/datasets/pull/2981.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2981.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/2980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2980/comments
https://api.github.com/repos/huggingface/datasets/issues/2980/events
https://github.com/huggingface/datasets/issues/2980
1,009,873,482
I_kwDODunzps48MXJK
2,980
OpenSLR 25: ASR data for Amharic, Swahili and Wolof
{ "login": "cdleong", "id": 4109253, "node_id": "MDQ6VXNlcjQxMDkyNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdleong", "html_url": "https://github.com/cdleong", "followers_url": "https://api.github.com/users/cdleong/followers", "following_url": "https://api.github.com/users/cdleong/following{/other_user}", "gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdleong/subscriptions", "organizations_url": "https://api.github.com/users/cdleong/orgs", "repos_url": "https://api.github.com/users/cdleong/repos", "events_url": "https://api.github.com/users/cdleong/events{/privacy}", "received_events_url": "https://api.github.com/users/cdleong/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
3
"2021-09-28T15:04:36"
"2021-09-29T17:25:14"
null
CONTRIBUTOR
null
## Adding a Dataset - **Name:** *SLR25* - **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr; subset 25 covers Amharic, Swahili and Wolof data* - **Paper:** *https://www.openslr.org/25/ has citations for each of the three subsets.* - **Data:** *Currently the three links to the .tar.bz2 files can be found at https://www.openslr.org/25/* - **Motivation:** *Increase ASR data for underrepresented African languages. Also, other subsets of OpenSLR speech recognition have been uploaded, so this would be easy.* https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py has already been created for various other OpenSLR subsets, so this should be relatively straightforward to do.
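For reference, the existing subsets already load through the shared script as configs; a minimal sketch (the `SLR41` config name is taken from the current loader, and `SLR25` is hypothetical until the subset is added):

```python
from datasets import load_dataset

# Existing OpenSLR subsets are configs of a single shared loader.
ds = load_dataset("openslr", "SLR41")  # existing subset, shown for illustration
# Once subset 25 is added, it would presumably follow the same pattern:
# ds = load_dataset("openslr", "SLR25")
```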
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2980/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2979/comments
https://api.github.com/repos/huggingface/datasets/issues/2979/events
https://github.com/huggingface/datasets/issues/2979
1,009,634,147
I_kwDODunzps48Lctj
2,979
ValueError when computing f1 metric with average None
{ "login": "asofiaoliveira", "id": 74454835, "node_id": "MDQ6VXNlcjc0NDU0ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asofiaoliveira", "html_url": "https://github.com/asofiaoliveira", "followers_url": "https://api.github.com/users/asofiaoliveira/followers", "following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}", "gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}", "starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions", "organizations_url": "https://api.github.com/users/asofiaoliveira/orgs", "repos_url": "https://api.github.com/users/asofiaoliveira/repos", "events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}", "received_events_url": "https://api.github.com/users/asofiaoliveira/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-28T11:34:53"
"2021-10-01T14:17:38"
"2021-10-01T14:17:38"
NONE
null
## Describe the bug When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py: ```python return { "f1": f1_score( references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight, ).item(), } ``` Since the result is an array with more than one item, the `.item()` throws the error. I didn't submit a PR because the `.item()` might be needed for the other averages, and I'm not very familiar with the library. ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("f1") metric.add_batch(predictions=[2,34,1,34,1,2,3], references=[23,52,1,3,523,5,8]) metric.compute(average=None) ``` ## Expected results `array([0.66666667, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])` ## Actual results ValueError: can only convert an array of size 1 to a Python scalar ## Environment info - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.5 - PyArrow version: 5.0.0
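A user-side workaround in the meantime, sketched here: call scikit-learn directly, which returns the per-class array with no scalar conversion involved.

```python
from sklearn.metrics import f1_score

# Workaround sketch: with average=None, scikit-learn returns one F1 value
# per class, avoiding the metric script's .item() call entirely.
predictions = [2, 34, 1, 34, 1, 2, 3]
references = [23, 52, 1, 3, 523, 5, 8]
print(f1_score(references, predictions, average=None))
```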
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2979/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2978/comments
https://api.github.com/repos/huggingface/datasets/issues/2978/events
https://github.com/huggingface/datasets/issues/2978
1,009,521,419
I_kwDODunzps48LBML
2,978
Run CI tests against non-production server
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2021-09-28T09:41:26"
"2021-09-28T15:23:50"
null
MEMBER
null
Currently, the CI test suite performs requests to the HF production server. As discussed with @elishowk, we should refactor our tests to use the HF staging server instead, like `huggingface_hub` and `transformers`.
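A minimal sketch of what the redirection could look like (the `HF_ENDPOINT` variable and the staging URL are assumptions based on how `huggingface_hub` configures its own test suite, not a confirmed setup for this repository):

```python
import os

# Assumption: point the client at the staging Hub via an environment variable,
# set before the library is imported so the endpoint is picked up.
os.environ["HF_ENDPOINT"] = "https://moon-staging.huggingface.co"
```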
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2978/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2977/comments
https://api.github.com/repos/huggingface/datasets/issues/2977/events
https://github.com/huggingface/datasets/issues/2977
1,009,378,692
I_kwDODunzps48KeWE
2,977
Impossible to load compressed csv
{ "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "repos_url": "https://api.github.com/users/Valahaar/repos", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-28T07:18:54"
"2021-10-01T15:53:16"
"2021-10-01T15:53:15"
CONTRIBUTOR
null
## Describe the bug It is not possible to load a compressed CSV file anymore. ## Steps to reproduce the bug ```python load_dataset('csv', data_files=['/path/to/csv.bz2']) ``` ## Problem and possible solution This used to work, but the commit that broke it is [this one](https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782). `pandas` usually gets the compression information from the filename itself (which was previously passed directly). Now that it receives a file descriptor instead, it might be good to auto-infer the compression or let the user pass the `compression` kwarg to `load_dataset` (or maybe warn the user if the file ends with a commonly known compression suffix?). ## Environment info - `datasets` version: 1.10.0 (and above) - Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 3.0.0
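In the meantime, a workaround sketch (the path is a placeholder): read the file through pandas, which infers the compression from the suffix when given a path rather than a file descriptor, and wrap the frame.

```python
import pandas as pd

from datasets import Dataset

# Workaround sketch: pandas infers bz2 compression from the ".bz2" suffix
# when it receives a path, then the frame is wrapped as a Dataset.
df = pd.read_csv("/path/to/csv.bz2")
ds = Dataset.from_pandas(df)
```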
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2977/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2976/comments
https://api.github.com/repos/huggingface/datasets/issues/2976/events
https://github.com/huggingface/datasets/issues/2976
1,008,647,889
I_kwDODunzps48Hr7R
2,976
Can't load dataset
{ "login": "mskovalova", "id": 77006774, "node_id": "MDQ6VXNlcjc3MDA2Nzc0", "avatar_url": "https://avatars.githubusercontent.com/u/77006774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mskovalova", "html_url": "https://github.com/mskovalova", "followers_url": "https://api.github.com/users/mskovalova/followers", "following_url": "https://api.github.com/users/mskovalova/following{/other_user}", "gists_url": "https://api.github.com/users/mskovalova/gists{/gist_id}", "starred_url": "https://api.github.com/users/mskovalova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mskovalova/subscriptions", "organizations_url": "https://api.github.com/users/mskovalova/orgs", "repos_url": "https://api.github.com/users/mskovalova/repos", "events_url": "https://api.github.com/users/mskovalova/events{/privacy}", "received_events_url": "https://api.github.com/users/mskovalova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-27T21:38:14"
"2021-09-28T06:53:01"
"2021-09-28T06:53:01"
NONE
null
I'm trying to load a wikitext dataset ``` from datasets import load_dataset raw_datasets = load_dataset("wikitext") ``` ValueError: Config name is missing. Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1'] Example of usage: `load_dataset('wikitext', 'wikitext-103-raw-v1')`. If I try ``` from datasets import load_dataset raw_datasets = load_dataset("wikitext-2-v1") ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/wikitext-2-v1/wikitext-2-v1.py ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic (colab) - Python version: 3.7.12 - PyArrow version: 3.0.0
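As the first error message indicates, `wikitext` takes the config name as a separate second argument rather than as part of the dataset path:

```python
from datasets import load_dataset

# The config name is a second positional argument, not part of the dataset name.
raw_datasets = load_dataset("wikitext", "wikitext-2-v1")
```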
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2976/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2975/comments
https://api.github.com/repos/huggingface/datasets/issues/2975/events
https://github.com/huggingface/datasets/pull/2975
1,008,444,654
PR_kwDODunzps4sVAOt
2,975
ignore dummy folder and dataset_infos.json
{ "login": "Ishan-Kumar2", "id": 46553104, "node_id": "MDQ6VXNlcjQ2NTUzMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ishan-Kumar2", "html_url": "https://github.com/Ishan-Kumar2", "followers_url": "https://api.github.com/users/Ishan-Kumar2/followers", "following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}", "gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions", "organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs", "repos_url": "https://api.github.com/users/Ishan-Kumar2/repos", "events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}", "received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-27T18:09:03"
"2021-09-29T09:45:38"
"2021-09-29T09:05:38"
CONTRIBUTOR
null
Fixes #2877 Added `dataset_infos.json` to the ignored files list and also added a check to ignore files whose parent directory is `dummy`. Let me know if it is correct. Thanks :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2975/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2975/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2975", "html_url": "https://github.com/huggingface/datasets/pull/2975", "diff_url": "https://github.com/huggingface/datasets/pull/2975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2975.patch", "merged_at": "2021-09-29T09:05:38" }
https://api.github.com/repos/huggingface/datasets/issues/2974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2974/comments
https://api.github.com/repos/huggingface/datasets/issues/2974/events
https://github.com/huggingface/datasets/pull/2974
1,008,247,787
PR_kwDODunzps4sUZCX
2,974
Actually disable dummy labels by default
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-27T14:50:20"
"2021-09-29T09:04:42"
"2021-09-29T09:04:41"
MEMBER
null
So I might have just changed the docstring instead of the actual default argument value and not realized. @lhoestq I'm sorry >.>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2974/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2974", "html_url": "https://github.com/huggingface/datasets/pull/2974", "diff_url": "https://github.com/huggingface/datasets/pull/2974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2974.patch", "merged_at": "2021-09-29T09:04:41" }
https://api.github.com/repos/huggingface/datasets/issues/2973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2973/comments
https://api.github.com/repos/huggingface/datasets/issues/2973/events
https://github.com/huggingface/datasets/pull/2973
1,007,894,592
PR_kwDODunzps4sTRvk
2,973
Fix JSON metadata of masakhaner dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-27T09:09:08"
"2021-09-27T12:59:59"
"2021-09-27T12:59:59"
MEMBER
null
Fix #2971.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2973/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2973", "html_url": "https://github.com/huggingface/datasets/pull/2973", "diff_url": "https://github.com/huggingface/datasets/pull/2973.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2973.patch", "merged_at": "2021-09-27T12:59:58" }
https://api.github.com/repos/huggingface/datasets/issues/2972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2972/comments
https://api.github.com/repos/huggingface/datasets/issues/2972/events
https://github.com/huggingface/datasets/issues/2972
1,007,808,714
I_kwDODunzps48EfDK
2,972
OSError: Not enough disk space.
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
4
"2021-09-27T07:41:22"
"2021-09-28T06:45:27"
"2021-09-28T06:43:15"
CONTRIBUTOR
null
## Describe the bug I'm trying to download the `natural_questions` dataset, and I've specified a cache_dir that is located on a mounted disk and has enough disk space. However, the disk space check still reports that the root `/` disk does not have enough space. The file system structure is shown below: the root `/` has `115G` of disk space available, and `sda1` is mounted at `/mnt` with `1.2T` of disk space available: ``` / /mnt/sda1/path/to/args.dataset_cache_dir ``` ## Steps to reproduce the bug ```python dataset_config = DownloadConfig( cache_dir=os.path.abspath(args.dataset_cache_dir), resume_download=True, ) dataset = load_dataset("natural_questions", download_config=dataset_config) ``` ## Expected results Can download the dataset without an error. ## Actual results The following error is raised: ``` OSError: Not enough disk space. Needed: 134.92 GiB (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size) ``` ## Environment info - `datasets` version: 1.9.0 - Platform: Ubuntu 18.04 - Python version: 3.8.10 - PyArrow version:
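A sketch of a possible workaround (untested against this exact version; the path is a placeholder mirroring the report): pass `cache_dir` to `load_dataset` itself rather than only to `DownloadConfig`, so the library resolves the Arrow generation paths, and hence the free-space check, under the mounted disk.

```python
from datasets import load_dataset

# Possible workaround sketch: setting cache_dir on load_dataset (not only on
# DownloadConfig) should place downloads, generated Arrow files, and the
# disk space check under /mnt/sda1 instead of the root filesystem.
dataset = load_dataset(
    "natural_questions",
    cache_dir="/mnt/sda1/path/to/dataset_cache_dir",  # placeholder path
)
```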
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2972/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2971/comments
https://api.github.com/repos/huggingface/datasets/issues/2971/events
https://github.com/huggingface/datasets/issues/2971
1,007,696,522
I_kwDODunzps48EDqK
2,971
masakhaner dataset load problem
{ "login": "ontocord", "id": 8900094, "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ontocord", "html_url": "https://github.com/ontocord", "followers_url": "https://api.github.com/users/ontocord/followers", "following_url": "https://api.github.com/users/ontocord/following{/other_user}", "gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}", "starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ontocord/subscriptions", "organizations_url": "https://api.github.com/users/ontocord/orgs", "repos_url": "https://api.github.com/users/ontocord/repos", "events_url": "https://api.github.com/users/ontocord/events{/privacy}", "received_events_url": "https://api.github.com/users/ontocord/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-27T04:59:07"
"2021-09-27T12:59:59"
"2021-09-27T12:59:59"
CONTRIBUTOR
null
## Describe the bug Masakhaner dataset is not loading ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("masakhaner",'amh') ``` ## Expected results Expected the return of a dataset ## Actual results ``` NonMatchingSplitsSizesError Traceback (most recent call last) <ipython-input-3-a6abc1161d4c> in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("masakhaner",'amh') 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits) 72 ] 73 if len(bad_splits) > 0: ---> 74 raise NonMatchingSplitsSizesError(str(bad_splits)) 75 logger.info("All the splits matched successfully.") 76 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=639927, num_examples=1751, dataset_name='masakhaner'), 'recorded': SplitInfo(name='train', num_bytes=639911, num_examples=1750, dataset_name='masakhaner')}, {'expected': SplitInfo(name='validation', num_bytes=92768, num_examples=251, dataset_name='masakhaner'), 'recorded': SplitInfo(name='validation', num_bytes=92753, num_examples=250, dataset_name='masakhaner')}, {'expected': SplitInfo(name='test', num_bytes=184286, num_examples=501, dataset_name='masakhaner'), 'recorded': SplitInfo(name='test', num_bytes=184271, num_examples=500, dataset_name='masakhaner')}] ``` ## Environment info Google Colab
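Until the dataset's recorded split sizes are corrected, the verification can be skipped; a sketch using the `ignore_verifications` flag from the 1.x `load_dataset` API:

```python
from datasets import load_dataset

# Workaround sketch: skip the split-size verification that raises
# NonMatchingSplitsSizesError; only do this if the slightly smaller
# recorded splits are acceptable for your use case.
dataset = load_dataset("masakhaner", "amh", ignore_verifications=True)
```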
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2971/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2970/comments
https://api.github.com/repos/huggingface/datasets/issues/2970/events
https://github.com/huggingface/datasets/issues/2970
1,007,340,089
I_kwDODunzps48Cso5
2,970
Magnet’s
{ "login": "rcacho172", "id": 90449239, "node_id": "MDQ6VXNlcjkwNDQ5MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcacho172", "html_url": "https://github.com/rcacho172", "followers_url": "https://api.github.com/users/rcacho172/followers", "following_url": "https://api.github.com/users/rcacho172/following{/other_user}", "gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions", "organizations_url": "https://api.github.com/users/rcacho172/orgs", "repos_url": "https://api.github.com/users/rcacho172/repos", "events_url": "https://api.github.com/users/rcacho172/events{/privacy}", "received_events_url": "https://api.github.com/users/rcacho172/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
0
"2021-09-26T09:50:29"
"2021-09-26T10:38:59"
"2021-09-26T10:38:59"
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2970/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2969/comments
https://api.github.com/repos/huggingface/datasets/issues/2969/events
https://github.com/huggingface/datasets/issues/2969
1,007,217,867
I_kwDODunzps48COzL
2,969
medical-dialog error
{ "login": "smeyerhot", "id": 43877130, "node_id": "MDQ6VXNlcjQzODc3MTMw", "avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smeyerhot", "html_url": "https://github.com/smeyerhot", "followers_url": "https://api.github.com/users/smeyerhot/followers", "following_url": "https://api.github.com/users/smeyerhot/following{/other_user}", "gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}", "starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions", "organizations_url": "https://api.github.com/users/smeyerhot/orgs", "repos_url": "https://api.github.com/users/smeyerhot/repos", "events_url": "https://api.github.com/users/smeyerhot/events{/privacy}", "received_events_url": "https://api.github.com/users/smeyerhot/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-25T23:08:44"
"2021-10-11T07:46:42"
"2021-10-11T07:46:42"
NONE
null
## Describe the bug When I attempt to download the huggingface dataset medical_dialog, it errors out midway through. ## Steps to reproduce the bug ```python raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English") ``` ## Expected results No error ## Actual results ``` 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits) 72 ] 73 if len(bad_splits) > 0: ---> 74 raise NonMatchingSplitsSizesError(str(bad_splits)) 75 logger.info("All the splits matched successfully.") 76 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}] ``` ## Environment info - `datasets` version: 1.21.1 - Platform: colab - Python version: colab 3.7 - PyArrow version: N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2969/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2968/comments
https://api.github.com/repos/huggingface/datasets/issues/2968/events
https://github.com/huggingface/datasets/issues/2968
1,007,209,488
I_kwDODunzps48CMwQ
2,968
`DatasetDict` cannot be exported to parquet if the splits have different features
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
"2021-09-25T22:18:39"
"2021-10-07T22:47:42"
"2021-10-07T22:47:26"
MEMBER
null
## Describe the bug I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly. For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folders representing individual splits. This works too, as long as the splits have identical features. If a split has different features from neighboring splits, then loading the dataset will fail: a single schema is used to load both splits, resulting in a failure to load the second parquet file. ## Steps to reproduce the bug The following works as expected: ```python from datasets import load_dataset ds = load_dataset("lhoestq/custom_squad") ds['train'].to_parquet("./ds/train/split.parquet") ds['validation'].to_parquet("./ds/validation/split.parquet") brand_new_dataset = load_dataset("ds") ``` Modifying a single split to add a new feature ends in a crash: ```python from datasets import load_dataset ds = load_dataset("lhoestq/custom_squad") def identical_answers(e): e['identical_answers'] = len(set(e['answers']['text'])) == 1 return e ds['validation'] = ds['validation'].map(identical_answers) ds['train'].to_parquet("./ds/train/split.parquet") ds['validation'].to_parquet("./ds/validation/split.parquet") brand_new_dataset = load_dataset("ds") ``` ``` File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 26, in <module> brand_new_dataset = load_dataset("ds") File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1151, in load_dataset builder_instance.download_and_prepare( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 642, in download_and_prepare self._download_and_prepare( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 732, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 1194, in _prepare_split writer.write_table(table) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1257, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1833, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1808, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "identical_answers" does not exist in table schema' ``` It does work, however, to use the `save_to_disk` and `load_from_disk` methods: ```py from datasets import load_from_disk ds = load_dataset("lhoestq/custom_squad") def identical_answers(e): e['identical_answers'] = len(set(e['answers']['text'])) == 1 return e ds['validation'] = ds['validation'].map(identical_answers) ds.save_to_disk("local_path") brand_new_dataset = load_from_disk("local_path") ``` ## Expected results The saving works correctly - but the loading fails. I would expect either an error when saving or an error-less instantiation of the dataset through the parquet files. If it's helpful, I've traced a possible patch to the `write_table` method here: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L424-L425 The writer is built only if the parquet writer is `None`, but I expect we would want to build a new writer as the table schema has changed. Furthermore, it relies on having the property `update_features` set to `True` in order to update the features: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L254-L255 but the `ArrowWriter` is instantiated without that option in the `_prepare_split` method of the `ArrowBasedBuilder`: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/builder.py#L1190 Updating these two parts to recreate a schema on each split results in an error that is, unfortunately, out of my expertise: ``` File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 27, in <module> brand_new_dataset = load_dataset("ds") File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1163, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 819, in as_dataset datasets = utils.map_nested( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 207, in map_nested mapped = [ File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 208, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 143, in _single_map_nested return function(data_struct) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 850, in _build_single_dataset ds = self._as_dataset( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 920, in _as_dataset dataset_kwargs = ArrowReader(self._cache_dir, self.info).read( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 217, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 238, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 173, in _read_files pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 308, in _get_table_from_filename table = ArrowReader.read_table(filename, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 327, in read_table return table_cls.from_file(filename) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 458, in from_file table = _memory_mapped_arrow_table_from_file(filename) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 45, in _memory_mapped_arrow_table_from_file pa_table = opened_stream.read_all() File "pyarrow/ipc.pxi", line 563, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status OSError: Header-type of flatbuffer-encoded Message is not RecordBatch. ``` ## Environment info - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.14.7-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
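As a stop-gap while the writer bug stands, one hedged option (not the reporter's proposal) is to load each split through the packaged `parquet` builder separately, so each file is read with its own schema, and then rebuild the `DatasetDict` by hand:

```python
from datasets import DatasetDict, load_dataset

# Each call reads a single parquet file, so no schema is shared across splits.
train = load_dataset("parquet", data_files="./ds/train/split.parquet", split="train")
validation = load_dataset("parquet", data_files="./ds/validation/split.parquet", split="train")

brand_new_dataset = DatasetDict({"train": train, "validation": validation})
```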
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2968/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2967/comments
https://api.github.com/repos/huggingface/datasets/issues/2967/events
https://github.com/huggingface/datasets/issues/2967
1,007,194,837
I_kwDODunzps48CJLV
2,967
Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets
{ "login": "WadeYin9712", "id": 42200725, "node_id": "MDQ6VXNlcjQyMjAwNzI1", "avatar_url": "https://avatars.githubusercontent.com/u/42200725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WadeYin9712", "html_url": "https://github.com/WadeYin9712", "followers_url": "https://api.github.com/users/WadeYin9712/followers", "following_url": "https://api.github.com/users/WadeYin9712/following{/other_user}", "gists_url": "https://api.github.com/users/WadeYin9712/gists{/gist_id}", "starred_url": "https://api.github.com/users/WadeYin9712/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WadeYin9712/subscriptions", "organizations_url": "https://api.github.com/users/WadeYin9712/orgs", "repos_url": "https://api.github.com/users/WadeYin9712/repos", "events_url": "https://api.github.com/users/WadeYin9712/events{/privacy}", "received_events_url": "https://api.github.com/users/WadeYin9712/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
0
"2021-09-25T20:58:15"
"2021-10-03T20:34:22"
"2021-10-03T20:34:22"
NONE
null
**Is your feature request related to a problem? Please describe.** Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets? **Describe the solution you'd like** N/A **Describe alternatives you've considered** N/A **Additional context** This is Da Yin at UCLA. Recently, we published an EMNLP 2021 paper on geo-diverse visual commonsense reasoning (https://arxiv.org/abs/2109.06860). We propose a new dataset called GD-VCR, a vision-and-language dataset that evaluates how well V&L models perform on scenarios involving geo-location-specific commonsense. We hope to have our V&L dataset incorporated into Huggingface to further promote our project, but I haven't seen many V&L datasets in the current package. Is it possible to add V&L datasets, and if so, how should we prepare for the loading? Thank you very much!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2967/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2966/comments
https://api.github.com/repos/huggingface/datasets/issues/2966/events
https://github.com/huggingface/datasets/pull/2966
1,007,142,233
PR_kwDODunzps4sRRMs
2,966
Upload greek-legal-code dataset
{ "login": "christospi", "id": 9130406, "node_id": "MDQ6VXNlcjkxMzA0MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/9130406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/christospi", "html_url": "https://github.com/christospi", "followers_url": "https://api.github.com/users/christospi/followers", "following_url": "https://api.github.com/users/christospi/following{/other_user}", "gists_url": "https://api.github.com/users/christospi/gists{/gist_id}", "starred_url": "https://api.github.com/users/christospi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/christospi/subscriptions", "organizations_url": "https://api.github.com/users/christospi/orgs", "repos_url": "https://api.github.com/users/christospi/repos", "events_url": "https://api.github.com/users/christospi/events{/privacy}", "received_events_url": "https://api.github.com/users/christospi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-25T16:52:15"
"2021-10-13T13:37:30"
"2021-10-13T13:37:30"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2966/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2966", "html_url": "https://github.com/huggingface/datasets/pull/2966", "diff_url": "https://github.com/huggingface/datasets/pull/2966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2966.patch", "merged_at": "2021-10-13T13:37:30" }
https://api.github.com/repos/huggingface/datasets/issues/2965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2965/comments
https://api.github.com/repos/huggingface/datasets/issues/2965/events
https://github.com/huggingface/datasets/issues/2965
1,007,084,153
I_kwDODunzps48BuJ5
2,965
Invalid download URL of WMT17 `zh-en` data
{ "login": "Ririkoo", "id": 3339950, "node_id": "MDQ6VXNlcjMzMzk5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3339950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ririkoo", "html_url": "https://github.com/Ririkoo", "followers_url": "https://api.github.com/users/Ririkoo/followers", "following_url": "https://api.github.com/users/Ririkoo/following{/other_user}", "gists_url": "https://api.github.com/users/Ririkoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ririkoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ririkoo/subscriptions", "organizations_url": "https://api.github.com/users/Ririkoo/orgs", "repos_url": "https://api.github.com/users/Ririkoo/repos", "events_url": "https://api.github.com/users/Ririkoo/events{/privacy}", "received_events_url": "https://api.github.com/users/Ririkoo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
0
"2021-09-25T13:17:32"
"2022-01-19T14:09:48"
null
NONE
null
## Describe the bug Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wmt17','zh-en') ``` ## Expected results The dataset downloads and loads without error. ## Actual results ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2965/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2965/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2964/comments
https://api.github.com/repos/huggingface/datasets/issues/2964/events
https://github.com/huggingface/datasets/issues/2964
1,006,605,904
I_kwDODunzps47_5ZQ
2,964
Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2021-09-24T15:55:21"
"2021-09-25T08:06:07"
"2021-09-25T08:06:07"
NONE
null
## Describe the bug After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `🤗datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required). ## Steps to reproduce the bug ```python import torch predictions = torch.ones((10,)) references = torch.zeros((10,)) from datasets import load_metric METRIC = load_metric("matthews_correlation") result = METRIC.compute(predictions=predictions, references=references) ``` ## Expected results We should expect a Python `dict` as follows: ``` { "matthews_correlation": float() } ``` as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix implies removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means that `.item()` will fail. ## Actual results ``` Traceback (most recent call last): File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main app() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__ return get_command(self)(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main rv = self.invoke(ctx) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke return __callback(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper return callback(**use_params) # type: ignore File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train metrics = trainer.evaluate() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate output = eval_loop( File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids) File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute "matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(), AttributeError: 'float' object has no attribute 'item' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23 - Python version: 3.9.7 - PyArrow version: 5.0.0
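For illustration, a minimal sketch of the fix the report describes (the actual metric script's method signature may differ): since `sklearn.metrics.matthews_corrcoef` already returns a plain Python `float`, dropping the `.item()` call is enough.

```python
from sklearn.metrics import matthews_corrcoef

def _compute(predictions, references, sample_weight=None):
    # matthews_corrcoef returns a float, not a torch.Tensor, so no .item()
    return {
        "matthews_correlation": float(
            matthews_corrcoef(references, predictions, sample_weight=sample_weight)
        )
    }
```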
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2964/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2963/comments
https://api.github.com/repos/huggingface/datasets/issues/2963/events
https://github.com/huggingface/datasets/issues/2963
1,006,588,605
I_kwDODunzps47_1K9
2,963
raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
{ "login": "keloemma", "id": 40454218, "node_id": "MDQ6VXNlcjQwNDU0MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keloemma", "html_url": "https://github.com/keloemma", "followers_url": "https://api.github.com/users/keloemma/followers", "following_url": "https://api.github.com/users/keloemma/following{/other_user}", "gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}", "starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keloemma/subscriptions", "organizations_url": "https://api.github.com/users/keloemma/orgs", "repos_url": "https://api.github.com/users/keloemma/repos", "events_url": "https://api.github.com/users/keloemma/events{/privacy}", "received_events_url": "https://api.github.com/users/keloemma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
"2021-09-24T15:35:11"
"2021-09-24T15:38:24"
"2021-09-24T15:38:24"
NONE
null
## Describe the bug I am trying to use `Dataset` to load my file so I can pass it to a BERT embeddings model, but when I finish loading with `Dataset` and pass it to the tokenizer via `map`, I get the following error: raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects. I was able to load my file with `Dataset` before, but since this morning I keep getting this error. ## Steps to reproduce the bug ```python # Xtrain, ytrain, filename, len_labels = read_file_2(fic) # Xtrain, lge_size = get_flaubert_layer(Xtrain, path_to_model_lge) data_preprocessed = make_new_traindata(Xtrain) my_dict = {"verbatim": data_preprocessed[1], "label": ytrain} # lemma with conjunction dataset = Dataset.from_dict(my_dict) ``` ## Environment info - `datasets` version: - Platform: - Python version: - PyArrow version:
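For context, a self-contained illustration of what `Dataset.map` expects (the column and function names below are hypothetical, not from the report): the mapped function must return a `dict` whose keys become columns, or `None` for side effects only.

```python
from datasets import Dataset

dataset = Dataset.from_dict({"verbatim": ["hello world", "foo bar"], "label": [0, 1]})

def tokenize(example):
    # Returning a dict updates/adds columns; returning a list raises the
    # TypeError quoted in the issue above.
    return {"tokens": example["verbatim"].split()}

dataset = dataset.map(tokenize)
print(dataset[0])
```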
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2963/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2962/comments
https://api.github.com/repos/huggingface/datasets/issues/2962/events
https://github.com/huggingface/datasets/issues/2962
1,006,557,666
I_kwDODunzps47_tni
2,962
Enable splits during streaming the dataset
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2021-09-24T15:01:29"
"2021-09-24T15:01:29"
null
CONTRIBUTOR
null
## Describe the Problem I'd like to stream only a specific percentage or part of a dataset, i.e. to do splitting while streaming as well. ## Solution Enabling splits when `streaming = True` as well. `e.g. dataset = load_dataset('dataset', split='train[:100]', streaming = True)` ## Alternatives Below is the current alternative: `dataset = load_dataset("dataset", split='train', streaming = True).take(100)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2962/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2961/comments
https://api.github.com/repos/huggingface/datasets/issues/2961/events
https://github.com/huggingface/datasets/pull/2961
1,006,453,781
PR_kwDODunzps4sPTXV
2,961
Fix CI doc build
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-24T13:13:28"
"2021-09-24T13:18:07"
"2021-09-24T13:18:07"
MEMBER
null
Pin `fsspec`. Versions before the issue: 'fsspec-2021.8.1', 's3fs-2021.8.1'. Versions generating the issue: 'fsspec-2021.9.0', 's3fs-0.5.1'.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2961/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2961", "html_url": "https://github.com/huggingface/datasets/pull/2961", "diff_url": "https://github.com/huggingface/datasets/pull/2961.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2961.patch", "merged_at": "2021-09-24T13:18:07" }
https://api.github.com/repos/huggingface/datasets/issues/2960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2960/comments
https://api.github.com/repos/huggingface/datasets/issues/2960/events
https://github.com/huggingface/datasets/pull/2960
1,006,222,850
PR_kwDODunzps4sOl0Y
2,960
Support pandas 1.3 new `read_csv` parameters
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-24T08:37:24"
"2021-09-24T11:22:31"
"2021-09-24T11:22:30"
CONTRIBUTOR
null
Support two new arguments introduced in pandas v1.3.0: - `encoding_errors` - `on_bad_lines` `read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
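A hedged usage sketch (the file name is illustrative) of how these arguments would be forwarded through the packaged `csv` builder once this change lands:

```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files="data.csv",
    encoding_errors="replace",  # pandas>=1.3: how to handle undecodable bytes
    on_bad_lines="skip",        # pandas>=1.3: drop malformed rows instead of raising
)
```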
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2960/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2960", "html_url": "https://github.com/huggingface/datasets/pull/2960", "diff_url": "https://github.com/huggingface/datasets/pull/2960.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2960.patch", "merged_at": "2021-09-24T11:22:30" }
https://api.github.com/repos/huggingface/datasets/issues/2959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2959/comments
https://api.github.com/repos/huggingface/datasets/issues/2959/events
https://github.com/huggingface/datasets/pull/2959
1,005,547,632
PR_kwDODunzps4sMihl
2,959
Added computer vision tasks
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
4
"2021-09-23T15:07:27"
"2022-02-04T14:16:58"
null
CONTRIBUTOR
null
Added various image processing/computer vision tasks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2959/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2959", "html_url": "https://github.com/huggingface/datasets/pull/2959", "diff_url": "https://github.com/huggingface/datasets/pull/2959.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2959.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/2958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2958/comments
https://api.github.com/repos/huggingface/datasets/issues/2958/events
https://github.com/huggingface/datasets/pull/2958
1,005,144,601
PR_kwDODunzps4sLTaB
2,958
Add security policy to the project
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-23T08:20:55"
"2021-10-21T15:16:44"
"2021-10-21T15:16:43"
MEMBER
null
Add security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository Close #2953.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2958/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2958", "html_url": "https://github.com/huggingface/datasets/pull/2958", "diff_url": "https://github.com/huggingface/datasets/pull/2958.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2958.patch", "merged_at": "2021-10-21T15:16:43" }
https://api.github.com/repos/huggingface/datasets/issues/2957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2957/comments
https://api.github.com/repos/huggingface/datasets/issues/2957/events
https://github.com/huggingface/datasets/issues/2957
1,004,868,337
I_kwDODunzps475RLx
2,957
MultiWOZ Dataset NonMatchingChecksumError
{ "login": "bradyneal", "id": 8754873, "node_id": "MDQ6VXNlcjg3NTQ4NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8754873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bradyneal", "html_url": "https://github.com/bradyneal", "followers_url": "https://api.github.com/users/bradyneal/followers", "following_url": "https://api.github.com/users/bradyneal/following{/other_user}", "gists_url": "https://api.github.com/users/bradyneal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bradyneal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bradyneal/subscriptions", "organizations_url": "https://api.github.com/users/bradyneal/orgs", "repos_url": "https://api.github.com/users/bradyneal/repos", "events_url": "https://api.github.com/users/bradyneal/events{/privacy}", "received_events_url": "https://api.github.com/users/bradyneal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
"2021-09-22T23:45:00"
"2021-10-01T06:23:32"
null
NONE
null
## Describe the bug The checksums for the downloaded MultiWOZ dataset and source MultiWOZ dataset aren't matching. ## Steps to reproduce the bug Both of the below dataset versions yield the checksum error: ```python from datasets import load_dataset dataset = load_dataset('multi_woz_v22', 'v2.2') dataset = load_dataset('multi_woz_v22', 'v2.2_active_only') ``` ## Expected results For the above calls to `load_dataset` to work. ## Actual results NonMatchingChecksumError. Traceback: > Traceback (most recent call last): File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-15-4e91280e112e>", line 1, in <module> dataset = load_dataset('multi_woz_v22', 'v2.2') File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/load.py", line 847, in load_dataset builder_instance.download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 615, in download_and_prepare self._download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare verify_checksums( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json'] ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
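A possible stop-gap, not suggested in the report: since the upstream files on GitHub appear to have changed after the checksums were recorded, skipping verification lets the download proceed, with the caveat that the data may differ from what the loading script expects.

```python
from datasets import load_dataset

# Assumption: only the recorded checksums are stale, not the data layout.
dataset = load_dataset("multi_woz_v22", "v2.2", ignore_verifications=True)
```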
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2957/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2956/comments
https://api.github.com/repos/huggingface/datasets/issues/2956/events
https://github.com/huggingface/datasets/issues/2956
1,004,306,367
I_kwDODunzps473H-_
2,956
Cache problem in the `load_dataset` method for local compressed file(s)
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
"2021-09-22T13:34:32"
"2021-09-22T13:34:32"
null
NONE
null
## Describe the bug Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder, `load_dataset` doesn't detect the change and loads the previous version. ## Steps to reproduce the bug To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.google.com/drive/11Em_Amoc-aPGhSBIkSHU2AvEh24nVayy?usp=sharing) that shows this behavior. For this example, I have created a toy dataset at: https://huggingface.co/datasets/SaulLu/toy_struc_dataset This dataset is composed of two versions: - v1 on commit `a6beb46` which has a single example `{'id': 1, 'value': {'tag': 'a', 'value': 1}}` in file `train.jsonl.gz` - v2 on commit `e7935f4` (`main` head) which has a single example `{'attr': 1, 'id': 1, 'value': 'a'}` in file `train.jsonl.gz` With a terminal, we can start to get the v1 version of the dataset ```bash git lfs install git clone https://huggingface.co/datasets/SaulLu/toy_struc_dataset cd toy_struc_dataset git checkout a6beb46 ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 ``` With a terminal, we can now get the v2 version of the dataset ```bash git checkout main ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 (not v2) ``` ## Expected results The last output should have been ``` {"id":1, "value": "a", "attr": 1} # This is the example in v2 ``` ## Ideas As discussed offline with Quentin, if the cache hash were sensitive to changes in a compressed file, we would probably not have this problem anymore. This situation leads me to suggest 2 other features: - to also have a `load_from_cache_file` argument in the "load_dataset" method - to reorganize the cache so that we can delete the caches related to a dataset (cf issue #ToBeFilledSoon) And thanks again for this great library :hugs: ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
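Until the cache hash accounts for compressed-file contents, a hedged stop-gap (an assumption on my part, not the reporter's proposed fix) is to bypass the cache explicitly:

```python
from datasets import load_dataset

# Force the loader to re-read the local files instead of reusing cached Arrow data.
dataset = load_dataset(
    "/content/toy_struc_dataset",
    data_files={"train": "*.jsonl.gz"},
    download_mode="force_redownload",
)
print(dataset["train"][0])  # should now reflect v2
```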
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2956/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2955/comments
https://api.github.com/repos/huggingface/datasets/issues/2955/events
https://github.com/huggingface/datasets/pull/2955
1,003,999,469
PR_kwDODunzps4sHuRu
2,955
Update legacy Python image for CI tests in Linux
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-22T08:25:27"
"2021-09-24T10:36:05"
"2021-09-24T10:36:05"
MEMBER
null
Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights: - Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to faster image downloads when a build starts, and a higher likelihood that the image is already cached on the host. - Improved reliability and stability - The existing legacy convenience images are rebuilt practically every day with potential changes from upstream that we cannot always test fast enough. This leads to frequent breaking changes, which is not the best environment for stable, deterministic builds. Next-gen images will only be rebuilt for security fixes and critical bugs, leading to more stable and deterministic images. More info: https://circleci.com/docs/2.0/circleci-images
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2955/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2955", "html_url": "https://github.com/huggingface/datasets/pull/2955", "diff_url": "https://github.com/huggingface/datasets/pull/2955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2955.patch", "merged_at": "2021-09-24T10:36:05" }
https://api.github.com/repos/huggingface/datasets/issues/2954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2954/comments
https://api.github.com/repos/huggingface/datasets/issues/2954/events
https://github.com/huggingface/datasets/pull/2954
1,003,904,803
PR_kwDODunzps4sHa8O
2,954
Run tests in parallel
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2021-09-22T07:00:44"
"2021-09-28T06:55:51"
"2021-09-28T06:55:51"
MEMBER
null
Run CI tests in parallel to speed up the test suite. Speed up results: - Linux: from `7m 30s` to `5m 32s` - Windows: from `13m 52s` to `11m 10s`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2954/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2954", "html_url": "https://github.com/huggingface/datasets/pull/2954", "diff_url": "https://github.com/huggingface/datasets/pull/2954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2954.patch", "merged_at": "2021-09-28T06:55:51" }
https://api.github.com/repos/huggingface/datasets/issues/2953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2953/comments
https://api.github.com/repos/huggingface/datasets/issues/2953/events
https://github.com/huggingface/datasets/issues/2953
1,002,766,517
I_kwDODunzps47xQC1
2,953
Trying to get in touch regarding a security issue
{ "login": "JamieSlome", "id": 55323451, "node_id": "MDQ6VXNlcjU1MzIzNDUx", "avatar_url": "https://avatars.githubusercontent.com/u/55323451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JamieSlome", "html_url": "https://github.com/JamieSlome", "followers_url": "https://api.github.com/users/JamieSlome/followers", "following_url": "https://api.github.com/users/JamieSlome/following{/other_user}", "gists_url": "https://api.github.com/users/JamieSlome/gists{/gist_id}", "starred_url": "https://api.github.com/users/JamieSlome/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JamieSlome/subscriptions", "organizations_url": "https://api.github.com/users/JamieSlome/orgs", "repos_url": "https://api.github.com/users/JamieSlome/repos", "events_url": "https://api.github.com/users/JamieSlome/events{/privacy}", "received_events_url": "https://api.github.com/users/JamieSlome/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-21T15:58:13"
"2021-10-21T15:16:43"
"2021-10-21T15:16:43"
NONE
null
Hey there! I'd like to report a security issue but cannot find contact instructions on your repository. If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future. Thank you for your consideration, and I look forward to hearing from you! (cc @huntr-helper)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2953/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2952/comments
https://api.github.com/repos/huggingface/datasets/issues/2952/events
https://github.com/huggingface/datasets/pull/2952
1,002,704,096
PR_kwDODunzps4sDU8S
2,952
Fix missing conda deps
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-21T15:23:01"
"2021-09-22T04:39:59"
"2021-09-21T15:30:44"
MEMBER
null
`aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 conda builds to fail. Fix #2932.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2952/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2952", "html_url": "https://github.com/huggingface/datasets/pull/2952", "diff_url": "https://github.com/huggingface/datasets/pull/2952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2952.patch", "merged_at": "2021-09-21T15:30:44" }
https://api.github.com/repos/huggingface/datasets/issues/2951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2951/comments
https://api.github.com/repos/huggingface/datasets/issues/2951/events
https://github.com/huggingface/datasets/pull/2951
1,001,267,888
PR_kwDODunzps4r-lGs
2,951
Dummy labels no longer on by default in `to_tf_dataset`
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2021-09-20T18:26:59"
"2021-09-21T14:00:57"
"2021-09-21T10:14:32"
MEMBER
null
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2951/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2951", "html_url": "https://github.com/huggingface/datasets/pull/2951", "diff_url": "https://github.com/huggingface/datasets/pull/2951.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2951.patch", "merged_at": "2021-09-21T10:14:32" }
https://api.github.com/repos/huggingface/datasets/issues/2950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2950/comments
https://api.github.com/repos/huggingface/datasets/issues/2950/events
https://github.com/huggingface/datasets/pull/2950
1,001,085,353
PR_kwDODunzps4r-AKu
2,950
Fix fn kwargs in filter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-20T15:10:26"
"2021-09-20T16:22:59"
"2021-09-20T15:28:01"
MEMBER
null
#2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927. I fixed that and added a test to make sure it doesn't happen again (for either `map` or `filter`). Fix #2927
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2950/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2950/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2950", "html_url": "https://github.com/huggingface/datasets/pull/2950", "diff_url": "https://github.com/huggingface/datasets/pull/2950.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2950.patch", "merged_at": "2021-09-20T15:28:01" }
https://api.github.com/repos/huggingface/datasets/issues/2949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2949/comments
https://api.github.com/repos/huggingface/datasets/issues/2949/events
https://github.com/huggingface/datasets/pull/2949
1,001,026,680
PR_kwDODunzps4r90Pt
2,949
Introduce web and wiki config in triviaqa dataset
{ "login": "shirte", "id": 1706443, "node_id": "MDQ6VXNlcjE3MDY0NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1706443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shirte", "html_url": "https://github.com/shirte", "followers_url": "https://api.github.com/users/shirte/followers", "following_url": "https://api.github.com/users/shirte/following{/other_user}", "gists_url": "https://api.github.com/users/shirte/gists{/gist_id}", "starred_url": "https://api.github.com/users/shirte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shirte/subscriptions", "organizations_url": "https://api.github.com/users/shirte/orgs", "repos_url": "https://api.github.com/users/shirte/repos", "events_url": "https://api.github.com/users/shirte/events{/privacy}", "received_events_url": "https://api.github.com/users/shirte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2021-09-20T14:17:23"
"2021-10-05T13:20:52"
"2021-10-01T15:39:29"
CONTRIBUTOR
null
The TriviaQA paper suggests that the two subsets (Wikipedia and Web) should be treated differently. There are also different leaderboards for the two sets on CodaLab. For that reason, this PR introduces additional builder configs in the trivia_qa dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2949/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2949", "html_url": "https://github.com/huggingface/datasets/pull/2949", "diff_url": "https://github.com/huggingface/datasets/pull/2949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2949.patch", "merged_at": "2021-10-01T15:39:29" }
https://api.github.com/repos/huggingface/datasets/issues/2948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2948/comments
https://api.github.com/repos/huggingface/datasets/issues/2948/events
https://github.com/huggingface/datasets/pull/2948
1,000,844,077
PR_kwDODunzps4r9PdV
2,948
Fix minor URL format in scitldr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-20T11:11:32"
"2021-09-20T13:18:28"
"2021-09-20T13:18:28"
MEMBER
null
While investigating issue #2918, I found these minor format issues in the URLs (when run on a Windows machine).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2948/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2948", "html_url": "https://github.com/huggingface/datasets/pull/2948", "diff_url": "https://github.com/huggingface/datasets/pull/2948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2948.patch", "merged_at": "2021-09-20T13:18:28" }
https://api.github.com/repos/huggingface/datasets/issues/2947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2947/comments
https://api.github.com/repos/huggingface/datasets/issues/2947/events
https://github.com/huggingface/datasets/pull/2947
1,000,798,338
PR_kwDODunzps4r9GIP
2,947
Don't use old, incompatible cache for the new `filter`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-20T10:18:59"
"2021-09-20T16:25:09"
"2021-09-20T13:43:02"
MEMBER
null
#2836 changed `Dataset.filter`, and the resulting data stored in the cache are different from and incompatible with those of the previous `filter` implementation. However, the caching mechanism wasn't able to differentiate between the old and the new implementation of `filter` (only the method name was taken into account). This is an issue because anyone who updates `datasets` and re-runs some code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result. To fix this I added the notion of versioning for dataset transforms in the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0. This way the new `filter` outputs are now considered different from the old ones from the caching point of view. This should fix #2943 cc @anton-l
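Below is a minimal sketch of the versioning idea described above, assuming the transform version is simply mixed into the cache fingerprint; the function name and hashing scheme are illustrative, not the actual `datasets` internals.
```python
import hashlib

def transform_fingerprint(previous_fingerprint: str, transform_name: str, version: str) -> str:
    # Hash the previous state together with the transform name AND its version,
    # so bumping the version (e.g. "1.0.0" -> "2.0.0") produces a new cache entry.
    payload = f"{previous_fingerprint}:{transform_name}:{version}".encode("utf-8")
    return hashlib.md5(payload).hexdigest()

old = transform_fingerprint("abc123", "filter", "1.0.0")
new = transform_fingerprint("abc123", "filter", "2.0.0")
assert old != new  # the old cached result is no longer looked up
```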
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2947/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2947/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2947", "html_url": "https://github.com/huggingface/datasets/pull/2947", "diff_url": "https://github.com/huggingface/datasets/pull/2947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2947.patch", "merged_at": "2021-09-20T13:43:01" }
https://api.github.com/repos/huggingface/datasets/issues/2946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2946/comments
https://api.github.com/repos/huggingface/datasets/issues/2946/events
https://github.com/huggingface/datasets/pull/2946
1,000,754,824
PR_kwDODunzps4r89f8
2,946
Update meteor score from nltk update
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-20T09:28:46"
"2021-09-20T09:35:59"
"2021-09-20T09:35:59"
MEMBER
null
It looks like there were issues in NLTK in the way the METEOR score was computed. A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values. I updated the score of the example in the docs.
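For context, a hedged usage sketch of the metric whose documented score changed; the sentence pair is illustrative, and the exact value returned depends on the installed `nltk` version.
```python
from datasets import load_metric

meteor = load_metric("meteor")
results = meteor.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat was sitting on the mat"],
)
print(results["meteor"])  # the exact value depends on the installed nltk version
```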
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2946/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2946", "html_url": "https://github.com/huggingface/datasets/pull/2946", "diff_url": "https://github.com/huggingface/datasets/pull/2946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2946.patch", "merged_at": "2021-09-20T09:35:59" }
https://api.github.com/repos/huggingface/datasets/issues/2945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2945/comments
https://api.github.com/repos/huggingface/datasets/issues/2945/events
https://github.com/huggingface/datasets/issues/2945
1,000,624,883
I_kwDODunzps47pFLz
2,945
Protect master branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
2
"2021-09-20T06:47:01"
"2021-09-20T12:01:27"
"2021-09-20T12:00:16"
MEMBER
null
After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the `datasets` master branch history, e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future: - [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch - Currently, simple merge commits are already disabled - I propose to disable rebase merging as well - ~~Protect the master branch from direct pushes (to avoid accidental pushing of merge commits)~~ - ~~This protection would reject direct pushes to the master branch~~ - ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~ - [x] Protect the master branch only from direct pushing of **merge commits** - GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch). - No need to disable/re-enable this protection on each release The purpose of this Issue is to open a discussion about this problem and to agree on a solution.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2945/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2945/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2944/comments
https://api.github.com/repos/huggingface/datasets/issues/2944/events
https://github.com/huggingface/datasets/issues/2944
1,000,544,370
I_kwDODunzps47oxhy
2,944
Add `remove_columns` to `IterableDataset `
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
1
"2021-09-20T04:01:00"
"2021-10-08T15:31:53"
"2021-10-08T15:31:53"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset dataset = load_dataset("c4", 'realnewslike', streaming=True, split='train') dataset = dataset.remove_columns('url') ``` ``` AttributeError: 'IterableDataset' object has no attribute 'remove_columns' ``` **Describe the solution you'd like** It would be nice to have `.remove_columns()` to match the `Datasets` api. **Describe alternatives you've considered** This can be done with a single call to `.map()` (see the sketch below), and I can try to help add this. 🤗
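A hedged sketch of the `.map()` workaround mentioned above, assuming the mapped function's return value replaces the example rather than being merged into it; the helper name `drop_url` is illustrative.
```python
from datasets import load_dataset

dataset = load_dataset("c4", "realnewslike", streaming=True, split="train")

def drop_url(example):
    # Keep every field except "url"; assumes map() replaces the example
    # with the returned dict rather than merging the two.
    return {k: v for k, v in example.items() if k != "url"}

dataset = dataset.map(drop_url)
```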
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2944/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2943/comments
https://api.github.com/repos/huggingface/datasets/issues/2943/events
https://github.com/huggingface/datasets/issues/2943
1,000,355,115
I_kwDODunzps47oDUr
2,943
Backwards compatibility broken for cached datasets that use `.filter()`
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
6
"2021-09-19T16:16:37"
"2021-09-20T16:25:43"
"2021-09-20T16:25:42"
MEMBER
null
## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}` Related feature: https://github.com/huggingface/datasets/pull/2836 :question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :) ## Workaround Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`. ## Steps to reproduce the bug 1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists. 2. `pip install datasets==1.11.0` and run the following snippet: ```python from datasets import load_dataset ids = ["1272-141231-0000"] ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.filter(lambda x: x["id"] in ids) ``` 3. `pip install datasets==1.12.1` and re-run the code again ## Expected results Same result as with the previous `datasets` version. ## Actual results ```bash Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1) Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow Traceback (most recent call last): File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module> ds = ds.filter(lambda x: x["id"] in ids) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter indices = self.map( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map return self._map_single( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file return cls( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__ self.info.features = self.info.features.reorder_fields_as(inferred_features) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as return Features(recursive_reorder(self, other)) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between 
{'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)} Process finished with exit code 1 ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2943/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2943/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2942/comments
https://api.github.com/repos/huggingface/datasets/issues/2942/events
https://github.com/huggingface/datasets/pull/2942
1,000,309,765
PR_kwDODunzps4r7tY6
2,942
Add SEDE dataset
{ "login": "Hazoom", "id": 13545154, "node_id": "MDQ6VXNlcjEzNTQ1MTU0", "avatar_url": "https://avatars.githubusercontent.com/u/13545154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hazoom", "html_url": "https://github.com/Hazoom", "followers_url": "https://api.github.com/users/Hazoom/followers", "following_url": "https://api.github.com/users/Hazoom/following{/other_user}", "gists_url": "https://api.github.com/users/Hazoom/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hazoom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hazoom/subscriptions", "organizations_url": "https://api.github.com/users/Hazoom/orgs", "repos_url": "https://api.github.com/users/Hazoom/repos", "events_url": "https://api.github.com/users/Hazoom/events{/privacy}", "received_events_url": "https://api.github.com/users/Hazoom/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2021-09-19T13:11:24"
"2021-09-24T10:39:55"
"2021-09-24T10:39:54"
CONTRIBUTOR
null
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions on how to add a dataset and a dataset card. Please see our paper for more details: https://arxiv.org/abs/2106.05006
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2942/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2942", "html_url": "https://github.com/huggingface/datasets/pull/2942", "diff_url": "https://github.com/huggingface/datasets/pull/2942.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2942.patch", "merged_at": "2021-09-24T10:39:54" }
https://api.github.com/repos/huggingface/datasets/issues/2941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2941/comments
https://api.github.com/repos/huggingface/datasets/issues/2941/events
https://github.com/huggingface/datasets/issues/2941
1,000,000,711
I_kwDODunzps47mszH
2,941
OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
1
"2021-09-18T10:39:13"
"2022-01-19T14:10:07"
null
NONE
null
## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}] ``` ## Expected results Loading is successful. ## Actual results Loading throws above error. ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2941/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2940/comments
https://api.github.com/repos/huggingface/datasets/issues/2940/events
https://github.com/huggingface/datasets/pull/2940
999,680,796
PR_kwDODunzps4r6EUF
2,940
add swedish_medical_ner dataset
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-17T20:03:05"
"2021-10-05T12:13:34"
"2021-10-05T12:13:33"
CONTRIBUTOR
null
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2940/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2940", "html_url": "https://github.com/huggingface/datasets/pull/2940", "diff_url": "https://github.com/huggingface/datasets/pull/2940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2940.patch", "merged_at": "2021-10-05T12:13:33" }
https://api.github.com/repos/huggingface/datasets/issues/2939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2939/comments
https://api.github.com/repos/huggingface/datasets/issues/2939/events
https://github.com/huggingface/datasets/pull/2939
999,639,630
PR_kwDODunzps4r58Gu
2,939
MENYO-20k repo has moved, updating URL
{ "login": "cdleong", "id": 4109253, "node_id": "MDQ6VXNlcjQxMDkyNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdleong", "html_url": "https://github.com/cdleong", "followers_url": "https://api.github.com/users/cdleong/followers", "following_url": "https://api.github.com/users/cdleong/following{/other_user}", "gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdleong/subscriptions", "organizations_url": "https://api.github.com/users/cdleong/orgs", "repos_url": "https://api.github.com/users/cdleong/repos", "events_url": "https://api.github.com/users/cdleong/events{/privacy}", "received_events_url": "https://api.github.com/users/cdleong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-17T19:01:54"
"2021-09-21T15:31:37"
"2021-09-21T15:31:36"
CONTRIBUTOR
null
The dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, so this updates the URL to match. https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2939/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2939", "html_url": "https://github.com/huggingface/datasets/pull/2939", "diff_url": "https://github.com/huggingface/datasets/pull/2939.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2939.patch", "merged_at": "2021-09-21T15:31:36" }
https://api.github.com/repos/huggingface/datasets/issues/2938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2938/comments
https://api.github.com/repos/huggingface/datasets/issues/2938/events
https://github.com/huggingface/datasets/pull/2938
999,552,263
PR_kwDODunzps4r5qwa
2,938
Take namespace into account in caching
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
7
"2021-09-17T16:57:33"
"2021-12-17T10:52:18"
"2021-09-29T13:01:31"
MEMBER
null
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset taking into account only the dataset name and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing. I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used: <s> `~/.cache/huggingface/datasets/username/dataset_name` for the data `~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files </s> EDIT: actually using three underscores: `~/.cache/huggingface/datasets/username___dataset_name` for the data `~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files This PR should fix the issue https://github.com/huggingface/datasets/issues/2842 cc @stas00
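A minimal sketch of the naming scheme described above, assuming the namespace separator is simply substituted into the path; the helper name `namespaced_cache_dir` is hypothetical, not the actual implementation.
```python
import os

def namespaced_cache_dir(base: str, dataset_path: str) -> str:
    # "username/dataset_name" -> "username___dataset_name", so two datasets
    # with the same name under different namespaces get distinct cache dirs.
    return os.path.join(base, dataset_path.replace("/", "___"))

print(namespaced_cache_dir(
    os.path.expanduser("~/.cache/huggingface/datasets"),
    "username/dataset_name",
))
```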
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2938/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2938/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2938", "html_url": "https://github.com/huggingface/datasets/pull/2938", "diff_url": "https://github.com/huggingface/datasets/pull/2938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2938.patch", "merged_at": "2021-09-29T13:01:31" }
https://api.github.com/repos/huggingface/datasets/issues/2937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2937/comments
https://api.github.com/repos/huggingface/datasets/issues/2937/events
https://github.com/huggingface/datasets/issues/2937
999,548,277
I_kwDODunzps47k-V1
2,937
load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
{ "login": "daqieq", "id": 40532020, "node_id": "MDQ6VXNlcjQwNTMyMDIw", "avatar_url": "https://avatars.githubusercontent.com/u/40532020?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daqieq", "html_url": "https://github.com/daqieq", "followers_url": "https://api.github.com/users/daqieq/followers", "following_url": "https://api.github.com/users/daqieq/following{/other_user}", "gists_url": "https://api.github.com/users/daqieq/gists{/gist_id}", "starred_url": "https://api.github.com/users/daqieq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daqieq/subscriptions", "organizations_url": "https://api.github.com/users/daqieq/orgs", "repos_url": "https://api.github.com/users/daqieq/repos", "events_url": "https://api.github.com/users/daqieq/events{/privacy}", "received_events_url": "https://api.github.com/users/daqieq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
"2021-09-17T16:52:10"
"2022-01-29T03:49:30"
null
NONE
null
## Describe the bug The standard process to download and load the wiki_bio dataset causes a PermissionError on Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any errors. ## Actual results PermissionError, see trace below: ``` Using custom data configuration default Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare self._save_info() File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__ next(self.gen) File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir os.rename(tmp_dir, dirname) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9' ``` By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed. It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project [Conan](https://github.com/conan-io/conan/issues/6560) with a similar issue with os.rename() if it helps debug this issue. ## Environment info - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.22449-SP0 - Python version: 3.8.12 - PyArrow version: 5.0.0
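A hedged workaround sketch (not the library's fix): retry `os.rename()` a few times, since transient file locks on Windows (antivirus, search indexing) often clear within seconds. The function name is illustrative.
```python
import os
import time

def rename_with_retry(src: str, dst: str, attempts: int = 5, delay: float = 1.0) -> None:
    # Retry the rename before giving up; re-raise on the final failed attempt.
    for attempt in range(attempts):
        try:
            os.rename(src, dst)
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```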
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2937/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2936/comments
https://api.github.com/repos/huggingface/datasets/issues/2936/events
https://github.com/huggingface/datasets/pull/2936
999,521,647
PR_kwDODunzps4r5knb
2,936
Check that array is not Float as nan != nan
{ "login": "Iwontbecreative", "id": 494951, "node_id": "MDQ6VXNlcjQ5NDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iwontbecreative", "html_url": "https://github.com/Iwontbecreative", "followers_url": "https://api.github.com/users/Iwontbecreative/followers", "following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}", "gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions", "organizations_url": "https://api.github.com/users/Iwontbecreative/orgs", "repos_url": "https://api.github.com/users/Iwontbecreative/repos", "events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}", "received_events_url": "https://api.github.com/users/Iwontbecreative/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-17T16:16:41"
"2021-09-21T09:39:05"
"2021-09-21T09:39:04"
CONTRIBUTOR
null
The exception is meant to catch issues with StructArrays/ListArrays, but it also catches FloatArrays containing NaN, because nan != nan. Skip FloatArrays, as we should not raise an exception for them.
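A quick illustration of the root cause, in plain Python (no pyarrow needed): NaN never compares equal to itself, so equality-based checks misreport float arrays containing NaN.
```python
import math

nan = float("nan")
assert nan != nan        # IEEE 754: NaN compares unequal to everything, itself included
assert math.isnan(nan)   # detect NaN with isnan, never with equality
```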
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2936/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2936", "html_url": "https://github.com/huggingface/datasets/pull/2936", "diff_url": "https://github.com/huggingface/datasets/pull/2936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2936.patch", "merged_at": "2021-09-21T09:39:04" }
https://api.github.com/repos/huggingface/datasets/issues/2935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2935/comments
https://api.github.com/repos/huggingface/datasets/issues/2935/events
https://github.com/huggingface/datasets/pull/2935
999,518,469
PR_kwDODunzps4r5j8B
2,935
Add Jigsaw unintended Bias
{ "login": "Iwontbecreative", "id": 494951, "node_id": "MDQ6VXNlcjQ5NDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iwontbecreative", "html_url": "https://github.com/Iwontbecreative", "followers_url": "https://api.github.com/users/Iwontbecreative/followers", "following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}", "gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions", "organizations_url": "https://api.github.com/users/Iwontbecreative/orgs", "repos_url": "https://api.github.com/users/Iwontbecreative/repos", "events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}", "received_events_url": "https://api.github.com/users/Iwontbecreative/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2021-09-17T16:12:31"
"2021-09-24T10:41:52"
"2021-09-24T10:41:52"
CONTRIBUTOR
null
Hi, here's a first attempt at this dataset. It would be great if it could be merged relatively quickly, as it is needed for BigScience-related stuff. This requires a manual download, and I had some trouble generating dummy_data in this setting, so feedback there is welcome.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2935/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2935", "html_url": "https://github.com/huggingface/datasets/pull/2935", "diff_url": "https://github.com/huggingface/datasets/pull/2935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2935.patch", "merged_at": "2021-09-24T10:41:52" }
https://api.github.com/repos/huggingface/datasets/issues/2934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2934/comments
https://api.github.com/repos/huggingface/datasets/issues/2934/events
https://github.com/huggingface/datasets/issues/2934
999,477,413
I_kwDODunzps47ktCl
2,934
to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2021-09-17T15:26:53"
"2021-10-13T09:03:23"
"2021-10-13T09:03:23"
MEMBER
null
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one reference left" ``` This causes issues because the table holds a reference to an open arrow file that should be closed. So on Windows it's not possible to delete or move the arrow file afterwards. Moreover, the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this. cc @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2934/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2933/comments
https://api.github.com/repos/huggingface/datasets/issues/2933/events
https://github.com/huggingface/datasets/pull/2933
999,392,566
PR_kwDODunzps4r5MHs
2,933
Replace script_version with revision
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-17T14:04:39"
"2021-09-20T09:52:10"
"2021-09-20T09:52:10"
MEMBER
null
As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without a loading script (i.e., datasets with only raw data files). This PR replaces the parameter name `script_version` with `revision`. This way, we are also aligned with: - Transformers: `AutoTokenizer.from_pretrained(..., revision=...)` - Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)`
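As a usage note, a minimal sketch of the renamed parameter; the repository name and tag below are illustrative, not a real dataset.
```python
from datasets import load_dataset

# Pin the dataset to a specific git revision (a tag, branch name or commit sha).
ds = load_dataset("username/dataset_name", revision="v1.0")  # illustrative repo name and tag
```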
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2933/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2933/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2933", "html_url": "https://github.com/huggingface/datasets/pull/2933", "diff_url": "https://github.com/huggingface/datasets/pull/2933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2933.patch", "merged_at": "2021-09-20T09:52:10" }
https://api.github.com/repos/huggingface/datasets/issues/2932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2932/comments
https://api.github.com/repos/huggingface/datasets/issues/2932/events
https://github.com/huggingface/datasets/issues/2932
999,317,750
I_kwDODunzps47kGD2
2,932
Conda build fails
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
2
"2021-09-17T12:49:22"
"2021-09-21T15:31:10"
"2021-09-21T15:31:10"
MEMBER
null
## Describe the bug
Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2932/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2931/comments
https://api.github.com/repos/huggingface/datasets/issues/2931/events
https://github.com/huggingface/datasets/pull/2931
998,326,359
PR_kwDODunzps4r1-JH
2,931
Fix bug in to_tf_dataset
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-16T15:08:03"
"2021-09-16T17:01:38"
"2021-09-16T17:01:37"
MEMBER
null
Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`.
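For context, a short sketch of the behavioral difference between the two methods:

```python
from datasets import Dataset

d = Dataset.from_dict({"a": [0, 1, 2]})

d.set_format("numpy")           # mutates `d` in place
d2 = d.with_format("python")    # returns a new dataset; `d` keeps its numpy format
```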
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2931/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2931", "html_url": "https://github.com/huggingface/datasets/pull/2931", "diff_url": "https://github.com/huggingface/datasets/pull/2931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2931.patch", "merged_at": "2021-09-16T17:01:37" }
https://api.github.com/repos/huggingface/datasets/issues/2930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2930/comments
https://api.github.com/repos/huggingface/datasets/issues/2930/events
https://github.com/huggingface/datasets/issues/2930
998,154,311
I_kwDODunzps47fqBH
2,930
Mutable columns argument breaks set_format
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-16T12:27:22"
"2021-09-16T13:50:53"
"2021-09-16T13:50:53"
MEMBER
null
## Describe the bug
If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change.

## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("glue", "cola")
column_list = ["idx", "label"]
dataset.set_format("python", columns=column_list)
column_list[1] = "foo"  # Change the list after we call `set_format`
dataset['train'][:4].keys()
```

## Expected results
```python
dict_keys(['idx', 'label'])
```

## Actual results
```python
dict_keys(['idx'])
```
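The standard remedy for this class of bug is a defensive copy of the mutable argument before storing it; a minimal sketch (the attribute names below are illustrative, not necessarily the library's internal names):

```python
from typing import List, Optional

def set_format(self, type: Optional[str] = None, columns: Optional[List[str]] = None, **kwargs):
    # Copy the caller's list so later mutations on their side
    # don't leak into the dataset's stored format state.
    self._format_columns = list(columns) if columns is not None else None
    self._format_type = type
```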
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2930/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2929/comments
https://api.github.com/repos/huggingface/datasets/issues/2929/events
https://github.com/huggingface/datasets/pull/2929
997,960,024
PR_kwDODunzps4r015C
2,929
Add regression test for null Sequence
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-16T08:58:33"
"2021-09-17T08:23:59"
"2021-09-17T08:23:59"
MEMBER
null
Relates to #2892 and #2900.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2929/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2929", "html_url": "https://github.com/huggingface/datasets/pull/2929", "diff_url": "https://github.com/huggingface/datasets/pull/2929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2929.patch", "merged_at": "2021-09-17T08:23:59" }
https://api.github.com/repos/huggingface/datasets/issues/2928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2928/comments
https://api.github.com/repos/huggingface/datasets/issues/2928/events
https://github.com/huggingface/datasets/pull/2928
997,941,506
PR_kwDODunzps4r0yUb
2,928
Update BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-16T08:39:20"
"2021-09-16T12:35:34"
"2021-09-16T12:35:34"
MEMBER
null
Update BibTeX entry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2928/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2928", "html_url": "https://github.com/huggingface/datasets/pull/2928", "diff_url": "https://github.com/huggingface/datasets/pull/2928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2928.patch", "merged_at": "2021-09-16T12:35:34" }
https://api.github.com/repos/huggingface/datasets/issues/2927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2927/comments
https://api.github.com/repos/huggingface/datasets/issues/2927/events
https://github.com/huggingface/datasets/issues/2927
997,654,680
I_kwDODunzps47dwCY
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
2
"2021-09-16T01:14:02"
"2021-09-20T16:23:22"
"2021-09-20T16:23:21"
NONE
null
## Describe the bug
Upgrading to 1.12 caused the `dataset.filter` call to fail with:

> get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels

## Steps to reproduce the bug
```python
def filter_good_rows(
    ex: Dict,
    valid_rel_labels: Set[str],
    valid_ner_labels: Set[str],
    tokenizer: PreTrainedTokenizerFast,
) -> bool:
    """Get the good rows"""
    encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer)
    ex["encoding"] = encoding
    for relation in ex["relations"]:
        if not is_valid_relation(relation, valid_rel_labels):
            return False
    for span in ex["spans"]:
        if not is_valid_span(span, valid_ner_labels, encoding):
            return False
    return True


def get_dataset():
    loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py")
    ds = load_dataset(
        loader_path,
        name="prodigy-dataset",
        data_files=sorted(file_paths),
        cache_dir=cache_dir,
    )["train"]
    valid_ner_labels = set(vocab.ner_category)
    valid_relations = set(vocab.relation_types.keys())

    ds = ds.filter(
        filter_good_rows,
        fn_kwargs=dict(
            valid_rel_labels=valid_relations,
            valid_ner_labels=valid_ner_labels,
            tokenizer=vocab.tokenizer,
        ),
        keep_in_memory=True,
        num_proc=num_proc,
    )
```

`ds` is a `DatasetDict` produced by a jsonl dataset. This runs fine on 1.11 but fails on 1.12.

**Stack Trace**

## Expected results
I expect the 1.12 `dataset.filter` to filter the dataset without raising, as it does on 1.11.

## Actual results
```
tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl
    ds = ds.filter(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
    out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter
    indices = self.map(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map
    return self._map_single(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
    out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single
    batch = apply_function_on_filtered_inputs(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...}
indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0

    def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0):
        """Utility to apply the function on a selection of columns."""
        nonlocal update_data
        fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
        if offset == 0:
            effective_indices = indices
        else:
            effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
        processed_inputs = (
>           function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
        )
E       TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels'

../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Mac
- Python version: 3.8.9
- PyArrow version: pyarrow==5.0.0
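For triaging, a minimal, self-contained reproduction of the same failure mode (the column and keyword names here are made up for illustration):

```python
from datasets import Dataset

def keep_if_above(example, threshold):
    """Mask function that takes an extra keyword argument."""
    return example["value"] > threshold

d = Dataset.from_dict({"value": [1, 2, 3]})

# Works on datasets 1.11; per this report, on 1.12 it raised:
#   TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'threshold'
filtered = d.filter(keep_if_above, fn_kwargs={"threshold": 1})
print(filtered["value"])  # expected: [2, 3]
```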
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2927/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2926/comments
https://api.github.com/repos/huggingface/datasets/issues/2926/events
https://github.com/huggingface/datasets/issues/2926
997,463,277
I_kwDODunzps47dBTt
2,926
Error when downloading datasets to non-traditional cache directories
{ "login": "dar-tau", "id": 45885627, "node_id": "MDQ6VXNlcjQ1ODg1NjI3", "avatar_url": "https://avatars.githubusercontent.com/u/45885627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dar-tau", "html_url": "https://github.com/dar-tau", "followers_url": "https://api.github.com/users/dar-tau/followers", "following_url": "https://api.github.com/users/dar-tau/following{/other_user}", "gists_url": "https://api.github.com/users/dar-tau/gists{/gist_id}", "starred_url": "https://api.github.com/users/dar-tau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dar-tau/subscriptions", "organizations_url": "https://api.github.com/users/dar-tau/orgs", "repos_url": "https://api.github.com/users/dar-tau/repos", "events_url": "https://api.github.com/users/dar-tau/events{/privacy}", "received_events_url": "https://api.github.com/users/dar-tau/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
"2021-09-15T19:59:46"
"2021-11-24T21:42:31"
null
NONE
null
## Describe the bug
When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails.

## Steps to reproduce the bug
```bash
ln -s /path/to/netapp/.cache ~/.cache
```
```python
load_dataset("imdb")
```

## Expected results
Successfully loading the IMDB dataset.

## Actual results
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.1.2
- Platform: Ubuntu
- Python version: 3.8

## Extra notes
Stranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction:
- With `cache_dir="/path/to/netapp/.cache"` the same thing happens.
- However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"`, it does work.
- On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it doesn't work anymore.

While I could test it only on a NetApp device, it might have to do with any other mounted FS.

Thanks :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2926/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2925/comments
https://api.github.com/repos/huggingface/datasets/issues/2925/events
https://github.com/huggingface/datasets/pull/2925
997,407,034
PR_kwDODunzps4rzJ9s
2,925
Add tutorial for no-code dataset upload
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
3
"2021-09-15T18:54:42"
"2021-09-27T17:51:55"
"2021-09-27T17:51:55"
MEMBER
null
This PR adds a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, and introduces the online tagging tool for creating tags as well as the Dataset card template for getting a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2925/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2925/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2925", "html_url": "https://github.com/huggingface/datasets/pull/2925", "diff_url": "https://github.com/huggingface/datasets/pull/2925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2925.patch", "merged_at": "2021-09-27T17:51:55" }
https://api.github.com/repos/huggingface/datasets/issues/2924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2924/comments
https://api.github.com/repos/huggingface/datasets/issues/2924/events
https://github.com/huggingface/datasets/issues/2924
997,378,113
I_kwDODunzps47cshB
2,924
"File name too long" error for file locks
{ "login": "gar1t", "id": 184949, "node_id": "MDQ6VXNlcjE4NDk0OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/184949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gar1t", "html_url": "https://github.com/gar1t", "followers_url": "https://api.github.com/users/gar1t/followers", "following_url": "https://api.github.com/users/gar1t/following{/other_user}", "gists_url": "https://api.github.com/users/gar1t/gists{/gist_id}", "starred_url": "https://api.github.com/users/gar1t/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gar1t/subscriptions", "organizations_url": "https://api.github.com/users/gar1t/orgs", "repos_url": "https://api.github.com/users/gar1t/repos", "events_url": "https://api.github.com/users/gar1t/events{/privacy}", "received_events_url": "https://api.github.com/users/gar1t/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
9
"2021-09-15T18:16:50"
"2021-10-29T09:42:24"
"2021-10-29T09:42:24"
NONE
null
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```

## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset

load_dataset("gar1t/test")
```

## Expected results
Expect the function to return without an error.

## Actual results
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
    builder_instance.download_and_prepare(
  File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
    self._save_info()
  File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
    with FileLock(lock_path):
  File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
    self.acquire()
  File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
    self._acquire()
  File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
    fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```

## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
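One common remedy for this class of failure is to cap the lock file name at the filesystem limit while keeping a hash suffix for uniqueness; a minimal sketch of the idea (not necessarily how `datasets` ended up fixing it):

```python
import hashlib
import os

MAX_FILENAME_LEN = 255  # typical ext4 per-filename limit

def safe_lock_path(path: str) -> str:
    """Shorten an over-long lock file name, keeping a hash suffix for uniqueness."""
    dirname, filename = os.path.split(path)
    if len(filename) <= MAX_FILENAME_LEN:
        return path
    digest = hashlib.sha256(filename.encode("utf-8")).hexdigest()[:16]
    # Leave room for "-<digest>.lock" within the limit
    stem = filename[: MAX_FILENAME_LEN - len(digest) - len(".lock") - 1]
    return os.path.join(dirname, f"{stem}-{digest}.lock")
```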
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2924/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2924/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2923/comments
https://api.github.com/repos/huggingface/datasets/issues/2923/events
https://github.com/huggingface/datasets/issues/2923
997,351,590
I_kwDODunzps47cmCm
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
0
"2021-09-15T17:44:38"
"2021-10-22T09:36:09"
null
CONTRIBUTOR
null
## Describe the bug
The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode.

## Steps to reproduce the bug
```python
from datasets import load_dataset

load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False)
## raises an error

load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True)
## does not raise an error
```

## Expected results
Both calls should raise the same error.

## Actual results
Call with `streaming=False`:
```
100%|██████████| 1/1 [00:00<00:00, 5825.42it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50...
100%|██████████| 5/5 [00:00<00:00, 15923.71it/s]
100%|██████████| 5/5 [00:00<00:00, 3346.88it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
    self._download_and_prepare(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split
    writer.write_table(table)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table
    pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp>
    pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
  File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__
  File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column
  File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "splits" does not exist in table schema'
```

Call with `streaming=True`:
```
100%|██████████| 1/1 [00:00<00:00, 6000.43it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
100%|██████████| 5/5 [00:00<00:00, 46916.15it/s]
100%|██████████| 5/5 [00:00<00:00, 148734.18it/s]
```

## Environment info
- `datasets` version: 1.12.1.dev0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2923/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2922/comments
https://api.github.com/repos/huggingface/datasets/issues/2922/events
https://github.com/huggingface/datasets/pull/2922
997,332,662
PR_kwDODunzps4ry6-s
2,922
Fix conversion of multidim arrays in list to arrow
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-15T17:21:36"
"2021-09-15T17:22:52"
"2021-09-15T17:21:45"
MEMBER
null
Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python lists before instantiating arrow arrays to work around this limitation. However in #2361 we started to keep numpy arrays in order to keep their dtypes.

It works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays. In this PR I added two strategies:
- one that takes a list of multi-dim numpy arrays and returns an arrow array in an optimized way (more common case)
- one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. This strategy is simpler than just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed.

Fix https://github.com/huggingface/datasets/issues/2921
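To illustrate the second strategy, a minimal sketch of recursively replacing multi-dim numpy arrays with nested lists of 1-d arrays before handing the data to pyarrow (illustrative, not the PR's exact code):

```python
import numpy as np
import pyarrow as pa

def to_arrow_compatible(obj):
    """Recursively turn multi-dim numpy arrays into nested lists of 1-d arrays."""
    if isinstance(obj, np.ndarray) and obj.ndim > 1:
        return [to_arrow_compatible(sub) for sub in obj]
    if isinstance(obj, dict):
        return {k: to_arrow_compatible(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_arrow_compatible(v) for v in obj]
    return obj

data = [np.zeros((2, 2)), np.ones((2, 2))]
arr = pa.array(to_arrow_compatible(data))  # nested lists of 1-d arrays convert fine
```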
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2922/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2922", "html_url": "https://github.com/huggingface/datasets/pull/2922", "diff_url": "https://github.com/huggingface/datasets/pull/2922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2922.patch", "merged_at": "2021-09-15T17:21:45" }
https://api.github.com/repos/huggingface/datasets/issues/2921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2921/comments
https://api.github.com/repos/huggingface/datasets/issues/2921/events
https://github.com/huggingface/datasets/issues/2921
997,325,424
I_kwDODunzps47cfpw
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-15T17:12:11"
"2021-09-15T17:21:45"
"2021-09-15T17:21:45"
MEMBER
null
This error has been introduced in https://github.com/huggingface/datasets/pull/2361

To reproduce:
```python
import numpy as np
from datasets import Dataset

d = Dataset.from_dict({"a": [np.zeros((2, 2))]})
```
raises
```python
Traceback (most recent call last):
  File "playground/ttest.py", line 5, in <module>
    d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch")
  File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict
    pa_table = InMemoryTable.from_pydict(mapping=mapping)
  File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict
    return cls(pa.Table.from_pydict(*args, **kwargs))
  File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict
  File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray
  File "pyarrow/array.pxi", line 223, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
  File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__
    out = pa.array(self.data, type=type)
  File "pyarrow/array.pxi", line 306, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
  File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2921/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2920/comments
https://api.github.com/repos/huggingface/datasets/issues/2920/events
https://github.com/huggingface/datasets/pull/2920
997,323,014
PR_kwDODunzps4ry4_u
2,920
Fix unwanted tqdm bar when accessing examples
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-15T17:09:11"
"2021-09-15T17:18:24"
"2021-09-15T17:18:24"
MEMBER
null
A change in #2814 added unwanted progress bars in `map_nested`. They are now disabled by default.

Fix #2919
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2920/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2920", "html_url": "https://github.com/huggingface/datasets/pull/2920", "diff_url": "https://github.com/huggingface/datasets/pull/2920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2920.patch", "merged_at": "2021-09-15T17:18:23" }
https://api.github.com/repos/huggingface/datasets/issues/2919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2919/comments
https://api.github.com/repos/huggingface/datasets/issues/2919/events
https://github.com/huggingface/datasets/issues/2919
997,127,487
I_kwDODunzps47bvU_
2,919
Unwanted progress bars when accessing examples
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-15T14:05:10"
"2021-09-15T17:21:49"
"2021-09-15T17:18:23"
MEMBER
null
When accessing examples from a dataset formatted for pytorch, unwanted progress bars appear:

```python
In [1]: import datasets as ds

In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch")

In [3]: d[0]
100%|████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s]
Out[3]: {'a': tensor(0)}
```

This is because the pytorch formatter calls `map_nested`, which uses progress bars.

cc @sgugger
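As a stop-gap before the fix, progress bars from `datasets` can be silenced globally; recent versions expose a top-level helper for this (a workaround sketch, not part of the fix):

```python
import datasets

# Silence all tqdm bars emitted by the `datasets` library
datasets.disable_progress_bar()

d = datasets.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch")
print(d[0])  # no progress bar printed
```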
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2919/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2919/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2918/comments
https://api.github.com/repos/huggingface/datasets/issues/2918/events
https://github.com/huggingface/datasets/issues/2918
997,063,347
I_kwDODunzps47bfqz
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
"2021-09-15T13:06:07"
"2021-12-01T08:15:00"
"2021-12-01T08:15:00"
CONTRIBUTOR
null
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_dataset iter_dset = iter( load_dataset("scitldr", name="FullText", split="test", streaming=True) ) next(iter_dset) ``` ## Expected results Returns the first sample of the dataset ## Actual results Calling `__next__` crashes with the following Traceback: ```python ----> 1 next(dset_iter) ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 339 340 def __iter__(self): --> 341 for key, example in self._iter(): 342 if self.features: 343 # we encode the example for ClassLabel feature types for example ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self) 336 else: 337 ex_iterable = self._ex_iterable --> 338 yield from ex_iterable 339 340 def __iter__(self): ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split) 162 163 with open(filepath, encoding="utf-8") as f: --> 164 for id_, row in enumerate(f): 165 data = json.loads(row) 166 if self.config.name == "AIC": ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length) 496 else: 497 length = min(self.size - self.loc, length) --> 498 return super().read(length) 499 500 async def async_fetch_all(self): ~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length) 1481 # don't even bother calling fetch 1482 return b"" -> 1483 out = self.cache._fetch(self.loc, self.loc + length) 1484 self.loc += len(out) 1485 return out ~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end) 378 elif start < self.start: 379 if self.end - end > self.blocksize: --> 380 self.cache = self.fetcher(start, bend) 381 self.start = start 382 else: ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs) 86 def wrapper(*args, **kwargs): 87 self = obj or args[0] ---> 88 return sync(self.loop, func, *args, **kwargs) 89 90 return wrapper ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs) 67 raise FSTimeoutError 68 if isinstance(result[0], BaseException): ---> 69 raise result[0] 70 return result[0] 71 ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout) 23 coro = asyncio.wait_for(coro, timeout=timeout) 24 try: ---> 25 result[0] = await coro 26 except Exception as ex: 27 result[0] = ex ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end) 538 if r.status == 206: 539 # partial content, as expected --> 540 out = await r.read() 541 elif "Content-Length" in r.headers: 542 cl = int(r.headers["Content-Length"]) ~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self) 1030 if self._body is None: 1031 try: -> 1032 self._body = await self.content.read() 1033 for trace in self._traces: 1034 await trace.send_response_chunk_received( 
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n) 342 async def read(self, n: int = -1) -> bytes: 343 if self._exception is not None: --> 344 raise self._exception 345 346 # migration problem; with DataQueue you have to catch ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyArrow version: 2.0.0 - aiohttp version: 3.7.4.post0
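A minimal workaround sketch, assuming non-streaming mode is acceptable: it downloads and prepares the files locally, avoiding the aiohttp/fsspec HTTP range-request path that raises the `ClientPayloadError` above.

```python
from datasets import load_dataset

# Workaround sketch: drop streaming=True; the files are fetched in full
# and cached, so no partial (range) requests are issued.
dset = load_dataset("scitldr", name="FullText", split="test")
print(dset[0])
```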
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2918/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2917/comments
https://api.github.com/repos/huggingface/datasets/issues/2917/events
https://github.com/huggingface/datasets/issues/2917
997,041,658
I_kwDODunzps47baX6
2,917
windows download abnormal
{ "login": "wei1826676931", "id": 52347799, "node_id": "MDQ6VXNlcjUyMzQ3Nzk5", "avatar_url": "https://avatars.githubusercontent.com/u/52347799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wei1826676931", "html_url": "https://github.com/wei1826676931", "followers_url": "https://api.github.com/users/wei1826676931/followers", "following_url": "https://api.github.com/users/wei1826676931/following{/other_user}", "gists_url": "https://api.github.com/users/wei1826676931/gists{/gist_id}", "starred_url": "https://api.github.com/users/wei1826676931/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wei1826676931/subscriptions", "organizations_url": "https://api.github.com/users/wei1826676931/orgs", "repos_url": "https://api.github.com/users/wei1826676931/repos", "events_url": "https://api.github.com/users/wei1826676931/events{/privacy}", "received_events_url": "https://api.github.com/users/wei1826676931/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
"2021-09-15T12:45:35"
"2021-09-16T17:17:48"
"2021-09-16T17:17:48"
NONE
null
## Describe the bug The script clearly exists (it is accessible from the browser), but downloading it fails on Windows. I then tried again on Linux, where it downloads normally. Why? ## Steps to reproduce the bug ```python3.7 + windows ![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png) # Sample code to reproduce the bug ``` ## Expected results The script can be downloaded normally. ## Actual results It can't. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows - Python version: 3.7 - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2917/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2916/comments
https://api.github.com/repos/huggingface/datasets/issues/2916/events
https://github.com/huggingface/datasets/pull/2916
997,003,661
PR_kwDODunzps4rx5ua
2,916
Add OpenAI's pass@k code evaluation metric
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2021-09-15T12:05:43"
"2021-11-12T14:19:51"
"2021-11-12T14:19:50"
CONTRIBUTOR
null
This PR introduces the `code_eval` metric, which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention. The addition of this metric should enable evaluation against the code evaluation datasets added in #2897 and #2893. A few open questions: - The implementation makes heavy use of multiprocessing, which this PR does not touch. Does this conflict with the multiprocessing natively integrated in `datasets`? - This metric executes generated Python code and as such poses the danger of executing malicious code. OpenAI addresses this issue by 1) commenting out the `exec` call in the code so the user has to actively uncomment it and read the warning, and 2) suggesting a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message? - Naming: the implementation sticks to the `predictions`/`references` naming; however, the references are not reference solutions but unit tests that check the solutions. While reference solutions are also available, they are not used. Should the naming be adapted?
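A hedged usage sketch of the metric this PR adds, following the `predictions`/`references` convention described above. Note that computing it executes model-generated code, so depending on the safeguards discussed above, an explicit opt-in may also be required.

```python
from datasets import load_metric

# Each reference is a unit test (an assert string) for one problem, and
# each prediction is a list of candidate solutions for that problem.
code_eval = load_metric("code_eval")
pass_at_k, results = code_eval.compute(
    references=["assert add(2, 3) == 5"],
    predictions=[["def add(a, b):\n    return a + b"]],
    k=[1],
)
print(pass_at_k)  # e.g. {'pass@1': 1.0}
```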
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2916/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2916/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2916", "html_url": "https://github.com/huggingface/datasets/pull/2916", "diff_url": "https://github.com/huggingface/datasets/pull/2916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2916.patch", "merged_at": "2021-11-12T14:19:50" }
https://api.github.com/repos/huggingface/datasets/issues/2915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2915/comments
https://api.github.com/repos/huggingface/datasets/issues/2915/events
https://github.com/huggingface/datasets/pull/2915
996,870,071
PR_kwDODunzps4rxfWb
2,915
Fix fsspec AbstractFileSystem access
{ "login": "pierre-godard", "id": 3969168, "node_id": "MDQ6VXNlcjM5NjkxNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3969168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierre-godard", "html_url": "https://github.com/pierre-godard", "followers_url": "https://api.github.com/users/pierre-godard/followers", "following_url": "https://api.github.com/users/pierre-godard/following{/other_user}", "gists_url": "https://api.github.com/users/pierre-godard/gists{/gist_id}", "starred_url": "https://api.github.com/users/pierre-godard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pierre-godard/subscriptions", "organizations_url": "https://api.github.com/users/pierre-godard/orgs", "repos_url": "https://api.github.com/users/pierre-godard/repos", "events_url": "https://api.github.com/users/pierre-godard/events{/privacy}", "received_events_url": "https://api.github.com/users/pierre-godard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-15T09:39:20"
"2021-09-15T11:35:24"
"2021-09-15T11:35:24"
CONTRIBUTOR
null
This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2915/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2915", "html_url": "https://github.com/huggingface/datasets/pull/2915", "diff_url": "https://github.com/huggingface/datasets/pull/2915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2915.patch", "merged_at": "2021-09-15T11:35:24" }
https://api.github.com/repos/huggingface/datasets/issues/2914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2914/comments
https://api.github.com/repos/huggingface/datasets/issues/2914/events
https://github.com/huggingface/datasets/issues/2914
996,770,168
I_kwDODunzps47aYF4
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
{ "login": "pierre-godard", "id": 3969168, "node_id": "MDQ6VXNlcjM5NjkxNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3969168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierre-godard", "html_url": "https://github.com/pierre-godard", "followers_url": "https://api.github.com/users/pierre-godard/followers", "following_url": "https://api.github.com/users/pierre-godard/following{/other_user}", "gists_url": "https://api.github.com/users/pierre-godard/gists{/gist_id}", "starred_url": "https://api.github.com/users/pierre-godard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pierre-godard/subscriptions", "organizations_url": "https://api.github.com/users/pierre-godard/orgs", "repos_url": "https://api.github.com/users/pierre-godard/repos", "events_url": "https://api.github.com/users/pierre-godard/events{/privacy}", "received_events_url": "https://api.github.com/users/pierre-godard/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
"2021-09-15T07:54:06"
"2021-09-15T16:49:17"
"2021-09-15T16:49:16"
CONTRIBUTOR
null
## Describe the bug In one of my projects, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the `fsspec` module (it is the loop variable of the for loop entered because entrypoints are defined, see the loop in question [here](https://github.com/intake/filesystem_spec/blob/0589358d8a029ed6b60d031018f52be2eb721291/fsspec/__init__.py#L55)). As a result, `fsspec.spec`, which previously referred to the `spec` submodule, now refers to that `spec` variable. This makes the import of `datasets` fail, since it uses `fsspec.spec`. ## Steps to reproduce the bug I could reproduce the bug with a dummy poetry project. Here is the pyproject.toml: ```toml [tool.poetry] name = "debug-datasets" version = "0.1.0" description = "" authors = ["Pierre Godard"] [tool.poetry.dependencies] python = "^3.8" datasets = "^1.11.0" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.poetry.plugins."fsspec.specs"] "file2" = "fsspec.implementations.local.LocalFileSystem" ``` The only other file is an empty `debug_datasets/__init__.py`. The overall structure of the project is as follows: ``` . ├── pyproject.toml └── debug_datasets └── __init__.py ``` Then, within the project folder, run: ``` poetry install poetry run python ``` And in the python interpreter, try to import `datasets`: ``` import datasets ``` ## Expected results The import should run successfully. ## Actual results Here is the trace of the error I get: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .filesystems import extract_path_from_uri, is_remote_filesystem File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/filesystems/__init__.py", line 30, in <module> def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: AttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem' ``` ## Suggested fix In `datasets/filesystems/__init__.py`, line 30, replace: ``` def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: ``` with: ``` def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool: ``` I will come up with a PR soon if this effectively solves the issue. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: WSL2 (Ubuntu 20.04.1 LTS) - Python version: 3.8.5 - PyArrow version: 5.0.0 - `fsspec` version: 2021.8.1
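A minimal sketch of the module-attribute shadowing described above, assuming no real entrypoints are installed; the assignment simulates the loop variable that fsspec's `__init__` leaks:

```python
import fsspec

# Before shadowing: fsspec.spec is the submodule defining AbstractFileSystem.
print(type(fsspec.spec))  # <class 'module'>

# Simulate the leaked loop variable `spec` from fsspec's entrypoint loop.
fsspec.spec = "leaked loop variable"

try:
    fsspec.spec.AbstractFileSystem
except AttributeError as err:
    print(err)  # 'str' object has no attribute 'AbstractFileSystem'
```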
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2914/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2913/comments
https://api.github.com/repos/huggingface/datasets/issues/2913/events
https://github.com/huggingface/datasets/issues/2913
996,436,368
I_kwDODunzps47ZGmQ
2,913
timit_asr dataset only includes one text phrase
{ "login": "margotwagner", "id": 39107794, "node_id": "MDQ6VXNlcjM5MTA3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/39107794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/margotwagner", "html_url": "https://github.com/margotwagner", "followers_url": "https://api.github.com/users/margotwagner/followers", "following_url": "https://api.github.com/users/margotwagner/following{/other_user}", "gists_url": "https://api.github.com/users/margotwagner/gists{/gist_id}", "starred_url": "https://api.github.com/users/margotwagner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/margotwagner/subscriptions", "organizations_url": "https://api.github.com/users/margotwagner/orgs", "repos_url": "https://api.github.com/users/margotwagner/repos", "events_url": "https://api.github.com/users/margotwagner/events{/privacy}", "received_events_url": "https://api.github.com/users/margotwagner/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2021-09-14T21:06:07"
"2021-09-15T08:05:19"
"2021-09-15T08:05:18"
NONE
null
## Describe the bug The 'timit_asr' dataset only includes one text phrase: the transcription "Would such an act of refusal be useful?" is repeated for every example rather than varying across examples. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english 1. Install the dataset and other packages ```python !pip install datasets>=1.5.0 !pip install transformers==4.4.0 !pip install soundfile !pip install jiwer ``` 2. Load the dataset ```python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") ``` 3. Remove columns that we don't want ```python timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]) ``` 4. Write a short function to display some random samples of the dataset. ```python from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit["train"].remove_columns(["file"])) ``` ## Expected results 10 random, different transcription phrases. ## Actual results 10 occurrences of the same transcription phrase, "Would such an act of refusal be useful?" ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.4.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: not listed
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2913/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2912/comments
https://api.github.com/repos/huggingface/datasets/issues/2912/events
https://github.com/huggingface/datasets/pull/2912
996,256,005
PR_kwDODunzps4rvhgp
2,912
Update link to Blog in docs footer
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-14T17:23:14"
"2021-09-15T07:59:23"
"2021-09-15T07:59:23"
MEMBER
null
Update link.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2912/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2912", "html_url": "https://github.com/huggingface/datasets/pull/2912", "diff_url": "https://github.com/huggingface/datasets/pull/2912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2912.patch", "merged_at": "2021-09-15T07:59:23" }
https://api.github.com/repos/huggingface/datasets/issues/2911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2911/comments
https://api.github.com/repos/huggingface/datasets/issues/2911/events
https://github.com/huggingface/datasets/pull/2911
996,202,598
PR_kwDODunzps4rvW7Y
2,911
Fix exception chaining
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-14T16:19:29"
"2021-09-16T15:04:44"
"2021-09-16T15:04:44"
MEMBER
null
Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`
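A generic sketch of the pattern this PR applies (illustrative, not the exact diff): chaining explicitly with `raise ... from err` marks the second exception as a direct consequence of the first, instead of producing the implicit "During handling of the above exception, another exception occurred" message.

```python
def load_config(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # "from err" chains explicitly: the traceback then reads
        # "The above exception was the direct cause of the following exception".
        raise ValueError(f"could not load config at {path}") from err
```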
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2911/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2911", "html_url": "https://github.com/huggingface/datasets/pull/2911", "diff_url": "https://github.com/huggingface/datasets/pull/2911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2911.patch", "merged_at": "2021-09-16T15:04:44" }
https://api.github.com/repos/huggingface/datasets/issues/2910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2910/comments
https://api.github.com/repos/huggingface/datasets/issues/2910/events
https://github.com/huggingface/datasets/pull/2910
996,149,632
PR_kwDODunzps4rvL9N
2,910
feat: 🎸 pass additional arguments to get private configs + info
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-14T15:24:19"
"2021-09-15T16:19:09"
"2021-09-15T16:19:06"
CONTRIBUTOR
null
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
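A hedged usage sketch under the change described above; the dataset name is a placeholder, and `use_auth_token=True` reads the token from the local Hugging Face login:

```python
from datasets import get_dataset_config_names, get_dataset_infos

# "my-org/private-dataset" is a hypothetical private dataset on the Hub.
configs = get_dataset_config_names("my-org/private-dataset", use_auth_token=True)
infos = get_dataset_infos("my-org/private-dataset", use_auth_token=True)
print(configs, list(infos))
```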
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2910/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2910", "html_url": "https://github.com/huggingface/datasets/pull/2910", "diff_url": "https://github.com/huggingface/datasets/pull/2910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2910.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/2909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2909/comments
https://api.github.com/repos/huggingface/datasets/issues/2909/events
https://github.com/huggingface/datasets/pull/2909
996,002,180
PR_kwDODunzps4rutdo
2,909
fix anli splits
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-14T13:10:35"
"2021-10-13T11:27:49"
"2021-10-13T11:27:49"
CONTRIBUTOR
null
I can't run the tests for the dummy data; I'm facing this error: `ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'. tests/conftest.py:10: in <module> from datasets import config E ImportError: cannot import name 'config' from 'datasets' (unknown location)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2909/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2909", "html_url": "https://github.com/huggingface/datasets/pull/2909", "diff_url": "https://github.com/huggingface/datasets/pull/2909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2909.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/2908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2908/comments
https://api.github.com/repos/huggingface/datasets/issues/2908/events
https://github.com/huggingface/datasets/pull/2908
995,970,612
PR_kwDODunzps4rumwW
2,908
Update Zenodo metadata with creator names and affiliation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-14T12:39:37"
"2021-09-14T14:29:25"
"2021-09-14T14:29:25"
MEMBER
null
This PR helps in prefilling author data when automatically generating the DOI after each release.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2908/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2908", "html_url": "https://github.com/huggingface/datasets/pull/2908", "diff_url": "https://github.com/huggingface/datasets/pull/2908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2908.patch", "merged_at": "2021-09-14T14:29:25" }
https://api.github.com/repos/huggingface/datasets/issues/2907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2907/comments
https://api.github.com/repos/huggingface/datasets/issues/2907/events
https://github.com/huggingface/datasets/pull/2907
995,968,152
PR_kwDODunzps4rumOy
2,907
add story_cloze dataset
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-14T12:36:53"
"2021-10-08T21:41:42"
"2021-10-08T21:41:41"
CONTRIBUTOR
null
@lhoestq I have spent some time on this, but I still can't manage to correctly test the dummy_data.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2907/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2907/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2907", "html_url": "https://github.com/huggingface/datasets/pull/2907", "diff_url": "https://github.com/huggingface/datasets/pull/2907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2907.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/2906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2906/comments
https://api.github.com/repos/huggingface/datasets/issues/2906/events
https://github.com/huggingface/datasets/pull/2906
995,962,905
PR_kwDODunzps4rulH-
2,906
feat: 🎸 add a function to get a dataset config's split names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-14T12:31:22"
"2021-10-04T09:55:38"
"2021-10-04T09:55:37"
CONTRIBUTOR
null
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub Questions: - [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct? -> no: reverted - [x] Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos) -> yes: added
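A hedged usage sketch of the helper this PR adds, by analogy with `get_dataset_config_names` (exact name and signature per the PR title and description above):

```python
from datasets import get_dataset_split_names

# For public datasets; private ones can pass use_auth_token as described above.
splits = get_dataset_split_names("glue", "sst2")
print(splits)  # e.g. ['train', 'validation', 'test']
```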
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2906/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2906/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2906", "html_url": "https://github.com/huggingface/datasets/pull/2906", "diff_url": "https://github.com/huggingface/datasets/pull/2906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2906.patch", "merged_at": "2021-10-04T09:55:37" }
https://api.github.com/repos/huggingface/datasets/issues/2905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2905/comments
https://api.github.com/repos/huggingface/datasets/issues/2905/events
https://github.com/huggingface/datasets/pull/2905
995,843,964
PR_kwDODunzps4ruL5X
2,905
Update BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-14T10:16:17"
"2021-09-14T12:25:37"
"2021-09-14T12:25:37"
MEMBER
null
Update BibTeX entry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2905/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2905", "html_url": "https://github.com/huggingface/datasets/pull/2905", "diff_url": "https://github.com/huggingface/datasets/pull/2905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2905.patch", "merged_at": "2021-09-14T12:25:37" }
https://api.github.com/repos/huggingface/datasets/issues/2904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2904/comments
https://api.github.com/repos/huggingface/datasets/issues/2904/events
https://github.com/huggingface/datasets/issues/2904
995,814,222
I_kwDODunzps47WutO
2,904
FORCE_REDOWNLOAD does not work
{ "login": "anoopkatti", "id": 5278299, "node_id": "MDQ6VXNlcjUyNzgyOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5278299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anoopkatti", "html_url": "https://github.com/anoopkatti", "followers_url": "https://api.github.com/users/anoopkatti/followers", "following_url": "https://api.github.com/users/anoopkatti/following{/other_user}", "gists_url": "https://api.github.com/users/anoopkatti/gists{/gist_id}", "starred_url": "https://api.github.com/users/anoopkatti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anoopkatti/subscriptions", "organizations_url": "https://api.github.com/users/anoopkatti/orgs", "repos_url": "https://api.github.com/users/anoopkatti/repos", "events_url": "https://api.github.com/users/anoopkatti/events{/privacy}", "received_events_url": "https://api.github.com/users/anoopkatti/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
"2021-09-14T09:45:26"
"2021-10-06T09:37:19"
null
NONE
null
## Describe the bug With `GenerateMode.FORCE_REDOWNLOAD`, the documentation says:

| | Downloads | Dataset |
| --- | --- | --- |
| `REUSE_DATASET_IF_EXISTS` (default) | Reuse | Reuse |
| `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |
| `FORCE_REDOWNLOAD` | Fresh | Fresh |

However, the old dataset is loaded even when `FORCE_REDOWNLOAD` is chosen. ## Steps to reproduce the bug ```python import pandas as pd from datasets import load_dataset, GenerateMode pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) ``` ## Expected results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numerals'], num_rows: 10 }) ## Actual results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numbers'], num_rows: 5 }) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.10 - PyArrow version: 3.0.0
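A hedged workaround sketch while this is open, assuming a throwaway cache directory is acceptable: pointing the loader at a fresh `cache_dir` sidesteps the stale cache entirely.

```python
import tempfile

from datasets import load_dataset

# Workaround sketch: use an empty cache directory so the CSV is re-read
# regardless of what the default cache contains.
ee = load_dataset(
    "csv",
    data_files=["/tmp/test.tsv.gz"],
    delimiter="\t",
    split="train",
    cache_dir=tempfile.mkdtemp(),
)
print(ee)
```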
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2904/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2903/comments
https://api.github.com/repos/huggingface/datasets/issues/2903/events
https://github.com/huggingface/datasets/pull/2903
995,715,191
PR_kwDODunzps4rtxxV
2,903
Fix xpathopen to accept positional arguments
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2021-09-14T08:02:50"
"2021-09-14T08:51:21"
"2021-09-14T08:40:47"
MEMBER
null
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
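A sketch of the idea behind the fix, assuming the patched helper simply forwards positional arguments (such as the `"w"` mode that pytest passes) to the underlying streaming-aware open; the import path and exact signature are assumptions, not the merged diff:

```python
from datasets.utils.streaming_download_manager import xopen  # import path is an assumption

# Hypothetical shape of the fix: forward *args so calls like
# path.open("w") (mode passed positionally, as pytest does)
# work as well as path.open(mode="w").
def xpathopen(path, *args, **kwargs):
    return xopen(str(path), *args, **kwargs)
```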
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2903/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2903", "html_url": "https://github.com/huggingface/datasets/pull/2903", "diff_url": "https://github.com/huggingface/datasets/pull/2903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2903.patch", "merged_at": "2021-09-14T08:40:47" }
https://api.github.com/repos/huggingface/datasets/issues/2902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2902/comments
https://api.github.com/repos/huggingface/datasets/issues/2902/events
https://github.com/huggingface/datasets/issues/2902
995,254,216
MDU6SXNzdWU5OTUyNTQyMTY=
2,902
Add WIT Dataset
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
4
"2021-09-13T19:38:49"
"2021-09-27T17:46:55"
null
CONTRIBUTOR
null
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (excerpt from their Github README.md) > - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples. > - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages. > - A collection of diverse set of concepts and real world entities. > - Brings forth challenging real-world test sets. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2902/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2901/comments
https://api.github.com/repos/huggingface/datasets/issues/2901/events
https://github.com/huggingface/datasets/issues/2901
995,232,844
MDU6SXNzdWU5OTUyMzI4NDQ=
2,901
Incompatibility with pytest
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
1
"2021-09-13T19:12:17"
"2021-09-14T08:40:47"
"2021-09-14T08:40:47"
CONTRIBUTOR
null
## Describe the bug

After loading a dataset in streaming mode, pytest fails at session teardown: its cache provider calls `path.open("w")`, which hits the patched `xpathopen` and raises a `TypeError`.

## Steps to reproduce the bug

Create a test file, `test.py`:

```python
import datasets as ds

def test_load_dataset():
    ds.load_dataset("counter", split="train", streaming=True)
```

And launch it with pytest:

```bash
python -m pytest test.py
```

## Expected results

It should give something like:

```
collected 1 item

test.py .                                                                [100%]

======= 1 passed in 3.15s =======
```

## Actual results

```
============================== test session starts ==============================
platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml
plugins: anyio-3.3.1
collected 1 item

tests/queries/test_rows.py .                                             [100%]
Traceback (most recent call last):
  File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module>
    raise SystemExit(pytest.console_main())
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main
    code = main()
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main
    ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session
    config.hook.pytest_sessionfinish(
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
    gen.send(outcome)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
    outcome.get_result()
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish
    config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
  File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set
    f = path.open("w")
TypeError: xpathopen() takes 1 positional argument but 2 were given
```

## Environment info

- `datasets` version: 1.12.0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2901/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/2900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2900/comments
https://api.github.com/repos/huggingface/datasets/issues/2900/events
https://github.com/huggingface/datasets/pull/2900
994,922,580
MDExOlB1bGxSZXF1ZXN0NzMyNzczNDkw
2,900
Fix null sequence encoding
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2021-09-13T13:55:08"
"2021-09-13T14:17:43"
"2021-09-13T14:17:42"
MEMBER
null
The Sequence feature encoding was failing when a `None` sequence was used in a dataset. Fixes https://github.com/huggingface/datasets/issues/2892
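For illustration, a minimal sketch of the case this PR handles; the column name and values are hypothetical, and the post-fix output is an assumption based on the PR description rather than a verified result:

```python
from datasets import Dataset, Features, Sequence, Value

features = Features({"tokens": Sequence(Value("string"))})

# The second example uses None in place of the whole sequence; before this
# fix, encoding such an example raised an error instead of keeping the null.
ds = Dataset.from_dict({"tokens": [["a", "b"], None]}, features=features)
print(ds[1])  # expected after the fix: {'tokens': None}
```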
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2900/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2900", "html_url": "https://github.com/huggingface/datasets/pull/2900", "diff_url": "https://github.com/huggingface/datasets/pull/2900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2900.patch", "merged_at": "2021-09-13T14:17:42" }