Column schema (name, type, and observed value ranges):

| Column | Type | Values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.62B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–5.62k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | |
| state | string | 1 class |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| body | string | lengths 0–228k, nullable |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| state_reason | string | 2 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
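For orientation, here is a minimal sketch of loading and inspecting a dataset with this schema using the `datasets` library. The repository id is a hypothetical placeholder; the dump itself does not name the dataset it was exported from.

```python
from datasets import load_dataset

# "user/github-issues" is a hypothetical placeholder id: the dump above
# does not name the Hub repository it came from.
ds = load_dataset("user/github-issues", split="train")

print(ds.column_names)           # url, repository_url, labels_url, ...
print(ds[0]["title"])            # title of the first issue/PR record
print(ds[0]["is_pull_request"])  # True for PRs, False for plain issues
```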
https://api.github.com/repos/huggingface/datasets/issues/3222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3222/comments
https://api.github.com/repos/huggingface/datasets/issues/3222/events
https://github.com/huggingface/datasets/pull/3222
1,046,299,725
PR_kwDODunzps4uK_uG
3,222
Add docs for audio processing
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
"2021-11-05T23:07:59"
"2021-11-24T16:32:08"
"2021-11-24T15:35:52"
MEMBER
null
This PR adds documentation for the `Audio` feature. It describes: - The difference between loading `path` and `audio`, as well as use-cases/best practices for each of them. - Resampling audio files with `cast_column`, and then calling `ds[0]["audio"]` to automatically decode and resample to the desired sampling rate. - Resampling with `map`. Preview [here](https://52969-250213286-gh.circle-artifacts.com/0/docs/_build/html/audio_process.html), let me know if I'm missing anything!
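A minimal sketch of the resampling workflow this PR documents, assuming a dataset with an `Audio` column (the dataset name below is illustrative, not taken from the PR):

```python
from datasets import load_dataset, Audio

# "common_voice"/"tr" is an illustrative dataset; any dataset with an
# Audio column works the same way.
ds = load_dataset("common_voice", "tr", split="train")

# Cast the column to a new sampling rate; nothing is decoded yet.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Indexing decodes the file and resamples it to 16 kHz on the fly.
sample = ds[0]["audio"]
print(sample["sampling_rate"])  # 16000
```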
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3222/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3222", "html_url": "https://github.com/huggingface/datasets/pull/3222", "diff_url": "https://github.com/huggingface/datasets/pull/3222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3222.patch", "merged_at": "2021-11-24T15:35:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/3221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3221/comments
https://api.github.com/repos/huggingface/datasets/issues/3221/events
https://github.com/huggingface/datasets/pull/3221
1,045,890,512
PR_kwDODunzps4uJp4Z
3,221
Resolve data_files by split name
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-05T14:07:35"
"2021-11-08T13:52:20"
"2021-11-05T17:49:58"
MEMBER
null
As discussed in https://github.com/huggingface/datasets/issues/3027 we should automatically infer which file is supposed to go to which split, based on filenames. I added support for different kinds of patterns, for both dataset repositories and local directories: ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md └── dataset.csv Output patterns: {"train": ["*"]} ``` ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ train.csv └── test.csv my_dataset_repository/ β”œβ”€β”€ README.md └── data/ β”œβ”€β”€ train.csv └── test.csv my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ train_0.csv β”œβ”€β”€ train_1.csv β”œβ”€β”€ train_2.csv β”œβ”€β”€ train_3.csv β”œβ”€β”€ test_0.csv └── test_1.csv Output patterns: {"train": ["*train*"], "test": ["*test*"]} ``` ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md └── data/ β”œβ”€β”€ train/ β”‚ β”œβ”€β”€ shard_0.csv β”‚ β”œβ”€β”€ shard_1.csv β”‚ β”œβ”€β”€ shard_2.csv β”‚ └── shard_3.csv └── test/ β”œβ”€β”€ shard_0.csv └── shard_1.csv Output patterns: {"train": ["*train*/*", "*train*/**/*"], "test": ["*test*/*", "*test*/**/*"]} ``` and also this pattern that allows custom split names, and that is the structure used by #3098 for `push_to_hub` (cc @LysandreJik ): ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md └── data/ β”œβ”€β”€ train-00000-of-00003.csv β”œβ”€β”€ train-00001-of-00003.csv β”œβ”€β”€ train-00002-of-00003.csv β”œβ”€β”€ test-00000-of-00001.csv β”œβ”€β”€ random-00000-of-00003.csv β”œβ”€β”€ random-00001-of-00003.csv └── random-00002-of-00003.csv Output patterns: { "train": ["data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], "test": ["data/test-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], "random": ["data/random-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], } ``` You can check the documentation about structuring your repository [here](https://52640-250213286-gh.circle-artifacts.com/0/docs/_build/html/repository_structure.html). cc @stevhliu Fix https://github.com/huggingface/datasets/issues/3027 Fix https://github.com/huggingface/datasets/issues/3212 In the future we can also add support for dataset configurations.
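A hedged sketch of what this resolution looks like from the user side; the repository id is a placeholder:

```python
from datasets import load_dataset

# "user/my_dataset_repository" is a placeholder id. With files laid out as
# data/train-*.csv and data/test-*.csv, the splits are inferred from the
# filename patterns described above.
dsets = load_dataset("user/my_dataset_repository")
print(dsets)  # DatasetDict with "train" and "test" splits, no data_files needed
```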
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3221/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3221", "html_url": "https://github.com/huggingface/datasets/pull/3221", "diff_url": "https://github.com/huggingface/datasets/pull/3221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3221.patch", "merged_at": "2021-11-05T17:49:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/3219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3219/comments
https://api.github.com/repos/huggingface/datasets/issues/3219/events
https://github.com/huggingface/datasets/issues/3219
1,045,095,000
I_kwDODunzps4-SuJY
3,219
Eventual Invalid Token Error at setup of private datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-11-04T18:50:45"
"2021-11-08T13:23:06"
"2021-11-08T08:59:43"
MEMBER
null
## Describe the bug From time to time, there appear Invalid Token errors with private datasets: - https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534 ``` ____________ ERROR at setup of test_load_streaming_private_dataset _____________ ValueError: Invalid token passed! ____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____ ValueError: Invalid token passed! =========================== short test summary info ============================ ERROR tests/test_load.py::test_load_streaming_private_dataset - ValueError: I... ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data ``` - https://app.circleci.com/pipelines/github/huggingface/datasets/8557/workflows/a8383181-ba6d-4487-9d0a-f750b6dcb936/jobs/52763 ``` ____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____ [gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 hf_api = <huggingface_hub.hf_api.HfApi object at 0x7f4899bab908> hf_token = 'vgNbyuaLNEBuGbgCEtSBCOcPjZnngJufHkTaZvHwkXKGkHpjBPwmLQuJVXRxBuaRzNlGjlMpYRPbthfHPFWXaaEDTLiqTTecYENxukRYVAAdpeApIUPxcgsowadkTkPj' zip_csv_path = PosixPath('/tmp/pytest-of-circleci/pytest-0/popen-gw1/data16/dataset.csv.zip') @pytest.fixture(scope="session") def hf_private_dataset_repo_zipped_txt_data_(hf_api: HfApi, hf_token, zip_csv_path): repo_name = "repo_zipped_txt_data-{}".format(int(time.time() * 10e3)) hf_api.create_repo(token=hf_token, name=repo_name, repo_type="dataset", private=True) repo_id = f"{USER}/{repo_name}" hf_api.upload_file( token=hf_token, path_or_fileobj=str(zip_csv_path), path_in_repo="data.zip", repo_id=repo_id, > repo_type="dataset", ) tests/hub_fixtures.py:68: ... ValueError: Invalid token passed! =========================== short test summary info ============================ ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3219/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3218/comments
https://api.github.com/repos/huggingface/datasets/issues/3218/events
https://github.com/huggingface/datasets/pull/3218
1,045,032,313
PR_kwDODunzps4uG2UA
3,218
Fix code quality in riddle_sense dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-04T17:43:20"
"2021-11-04T17:50:03"
"2021-11-04T17:50:02"
MEMBER
null
Fix trailing whitespace. Fix #3217.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3218/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3218", "html_url": "https://github.com/huggingface/datasets/pull/3218", "diff_url": "https://github.com/huggingface/datasets/pull/3218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3218.patch", "merged_at": "2021-11-04T17:50:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/3217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3217/comments
https://api.github.com/repos/huggingface/datasets/issues/3217/events
https://github.com/huggingface/datasets/issues/3217
1,045,029,710
I_kwDODunzps4-SeNO
3,217
Fix code quality bug in riddle_sense dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "To give more context: https://github.com/psf/black/issues/318. `black` doesn't treat this as a bug, but `flake8` does. \r\n" ]
"2021-11-04T17:40:32"
"2021-11-04T17:50:02"
"2021-11-04T17:50:02"
MEMBER
null
## Describe the bug ``` datasets/riddle_sense/riddle_sense.py:36:21: W291 trailing whitespace ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3217/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3216/comments
https://api.github.com/repos/huggingface/datasets/issues/3216/events
https://github.com/huggingface/datasets/pull/3216
1,045,027,733
PR_kwDODunzps4uG1YS
3,216
Pin version exclusion for tensorflow incompatible with keras
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-04T17:38:06"
"2021-11-05T10:57:38"
"2021-11-05T10:57:37"
MEMBER
null
Once `tensorflow` version 2.6.2 is released: - https://github.com/tensorflow/tensorflow/commit/c1867f3bfdd1042f694df7a9870be51ba80543cb - https://pypi.org/project/tensorflow/2.6.2/ with the patch: - tensorflow/tensorflow#52927 we can remove the temporary fix we introduced in: - #3208 Fix #3209.
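For illustration, a version-exclusion pin of the kind this PR manages might look like the sketch below in `setup.py`; the exact specifier used by the repo is an assumption, not quoted from the diff.

```python
from setuptools import setup

setup(
    name="my_package",  # placeholder package name
    install_requires=[
        # Exclude only the releases incompatible with keras, instead of
        # capping the whole major version (illustrative specifier).
        "tensorflow>=2.2.0,!=2.6.0,!=2.6.1",
    ],
)
```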
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3216/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3216", "html_url": "https://github.com/huggingface/datasets/pull/3216", "diff_url": "https://github.com/huggingface/datasets/pull/3216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3216.patch", "merged_at": "2021-11-05T10:57:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3215/comments
https://api.github.com/repos/huggingface/datasets/issues/3215/events
https://github.com/huggingface/datasets/pull/3215
1,045,011,207
PR_kwDODunzps4uGx4o
3,215
Small updates to to_tf_dataset documentation
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-04T17:22:01"
"2021-11-04T18:55:38"
"2021-11-04T18:55:37"
MEMBER
null
I added a little more description about `to_tf_dataset` compared to just setting the format
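A hedged sketch of the `to_tf_dataset` call those docs describe, next to plain format-setting; the tiny numeric dataset keeps the example self-contained, and parameter names follow the current docs (the signature has varied across `datasets` versions):

```python
from datasets import Dataset

# A tiny numeric dataset so the example runs as-is; real use cases
# typically pass tokenized text columns.
ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]], "y": [0, 1, 0, 1]})

# Unlike ds.set_format("tf"), to_tf_dataset returns a batched, shuffled
# tf.data.Dataset that can be passed straight to Keras fit().
tf_ds = ds.to_tf_dataset(columns=["x"], label_cols=["y"], batch_size=2, shuffle=True)

for batch in tf_ds:
    print(batch)
```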
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3215/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3215", "html_url": "https://github.com/huggingface/datasets/pull/3215", "diff_url": "https://github.com/huggingface/datasets/pull/3215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3215.patch", "merged_at": "2021-11-04T18:55:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3213/comments
https://api.github.com/repos/huggingface/datasets/issues/3213/events
https://github.com/huggingface/datasets/pull/3213
1,044,745,313
PR_kwDODunzps4uF6W9
3,213
Fix tuple_ie download url
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-04T13:09:07"
"2021-11-05T14:16:06"
"2021-11-05T14:16:05"
CONTRIBUTOR
null
Fix #3204
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3213/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3213", "html_url": "https://github.com/huggingface/datasets/pull/3213", "diff_url": "https://github.com/huggingface/datasets/pull/3213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3213.patch", "merged_at": "2021-11-05T14:16:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/3212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3212/comments
https://api.github.com/repos/huggingface/datasets/issues/3212/events
https://github.com/huggingface/datasets/issues/3212
1,044,640,967
I_kwDODunzps4-Q_TH
3,212
Sort files before loading
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This will be fixed by https://github.com/huggingface/datasets/pull/3221" ]
"2021-11-04T11:08:31"
"2021-11-05T17:49:58"
"2021-11-05T17:49:58"
MEMBER
null
When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json`, etc.), the files are not loaded in order when using `load_dataset("my_data")`. This can lead to counter-intuitive results if, for example, the data files are sorted by date or similar, since they would then appear in a different order in the `Dataset`. The straightforward solution is to sort the list of files alphabetically before loading them. cc @lhoestq
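A hedged sketch of that fix at the level of user code: sort the resolved file list alphabetically before loading, so shards arrive in a deterministic order (the linked fix, #3221, is said to address this internally).

```python
from pathlib import Path
from datasets import load_dataset

# Explicit sorting sidesteps the non-deterministic ordering described above.
data_files = sorted(str(p) for p in Path("my_data").glob("data_*.json"))
ds = load_dataset("json", data_files=data_files, split="train")
# Files now load in order: data_001.json, data_002.json, ...
```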
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3212/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3212/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3211/comments
https://api.github.com/repos/huggingface/datasets/issues/3211/events
https://github.com/huggingface/datasets/pull/3211
1,044,617,913
PR_kwDODunzps4uFkBx
3,211
Fix disable_nullable default value to False
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-04T10:52:06"
"2021-11-04T11:08:21"
"2021-11-04T11:08:20"
MEMBER
null
Currently the `disable_nullable` parameter is not consistent across all dataset transforms. For example it is `False` in `map` but `True` in `flatten_indices`. This creates unexpected behaviors like this ```python from datasets import Dataset, concatenate_datasets d1 = Dataset.from_dict({"a": [0, 1, 2, 3]}) d2 = d1.filter(lambda x: x["a"] < 2).flatten_indices() d1.data.schema == d2.data.schema # False ``` This can cause issues when concatenating datasets for example. For consistency I set `disable_nullable` to `False` in `flatten_indices` and I fixed some docstrings cc @SBrandeis
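Following the PR's own snippet, a quick check of the intended post-fix behavior; the assertion is a sketch of the consistency goal stated above, not a quoted test from the diff.

```python
from datasets import Dataset

d1 = Dataset.from_dict({"a": [0, 1, 2, 3]})
d2 = d1.filter(lambda x: x["a"] < 2).flatten_indices()

# With disable_nullable defaulting to False everywhere, both transforms
# keep nullable fields and the schemas agree.
assert d1.data.schema == d2.data.schema
```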
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3211/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3211/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3211", "html_url": "https://github.com/huggingface/datasets/pull/3211", "diff_url": "https://github.com/huggingface/datasets/pull/3211.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3211.patch", "merged_at": "2021-11-04T11:08:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/3210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3210/comments
https://api.github.com/repos/huggingface/datasets/issues/3210/events
https://github.com/huggingface/datasets/issues/3210
1,044,611,471
I_kwDODunzps4-Q4GP
3,210
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py
{ "login": "xiuzhilu", "id": 28184983, "node_id": "MDQ6VXNlcjI4MTg0OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/28184983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiuzhilu", "html_url": "https://github.com/xiuzhilu", "followers_url": "https://api.github.com/users/xiuzhilu/followers", "following_url": "https://api.github.com/users/xiuzhilu/following{/other_user}", "gists_url": "https://api.github.com/users/xiuzhilu/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiuzhilu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiuzhilu/subscriptions", "organizations_url": "https://api.github.com/users/xiuzhilu/orgs", "repos_url": "https://api.github.com/users/xiuzhilu/repos", "events_url": "https://api.github.com/users/xiuzhilu/events{/privacy}", "received_events_url": "https://api.github.com/users/xiuzhilu/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Hi ! Do you have some kind of proxy in your browser that gives you access to internet ?\r\n\r\nMaybe you're having this error because you don't have access to this URL from python ?", "Hi,do you fixed this error?\r\nI still have this issue when use \"use_auth_token=True\"", "You don't need authentication to access those github hosted files\r\nPlease check that you can access this URL from your browser and also from your terminal" ]
"2021-11-04T10:47:26"
"2022-03-30T08:26:35"
"2022-03-30T08:26:35"
NONE
null
When I run `python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate` to finetune a translation model on Hugging Face, I get the error "ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py". But I can open https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py in a browser. What should I do to solve this issue?
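Following the suggestion in the comments, a minimal check that the URL is reachable from Python rather than just from the browser (e.g. when a proxy is configured only in the browser):

```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py"
# A 200 response means the script is reachable from this environment;
# a timeout or connection error points at a proxy or network issue.
resp = requests.get(url, timeout=10)
print(resp.status_code)
```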
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3210/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3209/comments
https://api.github.com/repos/huggingface/datasets/issues/3209/events
https://github.com/huggingface/datasets/issues/3209
1,044,505,771
I_kwDODunzps4-QeSr
3,209
Unpin keras once TF fixes its release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-04T09:15:32"
"2021-11-05T10:57:37"
"2021-11-05T10:57:37"
MEMBER
null
Related to: - #3208
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3209/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3208/comments
https://api.github.com/repos/huggingface/datasets/issues/3208/events
https://github.com/huggingface/datasets/pull/3208
1,044,504,093
PR_kwDODunzps4uFTIs
3,208
Pin keras version until TF fixes its release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-04T09:13:32"
"2021-11-04T09:30:55"
"2021-11-04T09:30:54"
MEMBER
null
Fix #3207.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3208/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3208", "html_url": "https://github.com/huggingface/datasets/pull/3208", "diff_url": "https://github.com/huggingface/datasets/pull/3208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3208.patch", "merged_at": "2021-11-04T09:30:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/3207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3207/comments
https://api.github.com/repos/huggingface/datasets/issues/3207/events
https://github.com/huggingface/datasets/issues/3207
1,044,496,389
I_kwDODunzps4-QcAF
3,207
CI error: Another metric with the same name already exists in Keras 2.7.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-11-04T09:04:11"
"2021-11-04T09:30:54"
"2021-11-04T09:30:54"
MEMBER
null
## Describe the bug Release of TensorFlow 2.7.0 contains an incompatibility with Keras. See: - keras-team/keras#15579 This breaks our CI test suite: https://app.circleci.com/pipelines/github/huggingface/datasets/8493/workflows/055c7ae2-43bc-49b4-9f11-8fc71f35a25c/jobs/52363
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3207/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3206/comments
https://api.github.com/repos/huggingface/datasets/issues/3206/events
https://github.com/huggingface/datasets/pull/3206
1,044,216,270
PR_kwDODunzps4uEZJe
3,206
[WIP] Allow user-defined hash functions via a registry
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-03T23:25:42"
"2021-11-05T12:38:11"
"2021-11-05T12:38:04"
CONTRIBUTOR
null
Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** of the object. As an example, we found in the linked topic that loaded spaCy models (`Language` objects) have different hashes when `dump`'d, but their byte representation with `Language.to_bytes()` _is_ deterministic. It would therefore be great if we could specify that for `Language` objects, the hasher should hash the object's `to_bytes()` return value instead of the object itself. This PR adds a new, but tiny, dependency to manage the registry, namely [`catalogue`](https://github.com/explosion/catalogue). Two files have been changed (apart from the added dependency in `setup.py`) and one file has been added. **utils.registry** (added) This file defines our custom Registry and builds a registry called "hashers". A Registry is basically a dictionary from names (str) to functions. A function can be added to the registry by a decorator, e.g. ```python @hashers.register(spacy.Language) def hash_spacy_language(nlp): return Hasher.hash(nlp.to_bytes()) ``` You'll notice that `spacy.Language` is not a string, even though the registry holds a str->func mapping. To accomplish this with classes in a dynamic way, catalogue.Registry needed to be subclassed and modified as `DatasetsRegistry`. All methods that use a name as an input are now modified so that classes are deterministically converted into strings in such a way that we can later retrieve the actual class from the string (below). **utils.py_utils** (modified) Added two functions to deal with classes and their qualified names, that is, their full descriptive name including the module. On the one hand it allows us to retrieve a string from a given class, e.g. given `Module` class, return `torch.nn.Module` str. Conversely, a function is added to convert such a fully qualified name into a class. For instance, given the string `torch.nn.Module`, return the `Module` class. These straightforward methods allow us to interchangeably use classes and strings without any needed user interaction - they can just register a class, and behind the scenes `DatasetsRegistry` converts these to deterministic strings. **fingerprint** (modified) Updated Hasher.hash so that if the object to hash is an instance of a class in the registry, the registered function is used to hash the object instead of the default behavior. To do so we iterate over the registry `hashers` and convert its keys (strings) into classes, and then we can use `isinstance`. ```python # Check if the current object is an instance that is # applicable to the user-defined hashers. If so, hash # with the user-defined function for full_module_name, func in hashers.get_all().items(): registered_cls = get_cls_from_qualname(full_module_name) if isinstance(value, registered_cls): return func(value) ``` **Putting it all together** To test this, you can try the following example with spaCy. First install spaCy from source and check out a specific commit. ```shell git clone https://github.com/explosion/spaCy.git cd spaCy/ git checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf cd .. git clone https://github.com/BramVanroy/datasets.git cd datasets git checkout registry pip install -e . pip install ../spaCy spacy download en_core_web_sm ``` Now you can run the following script. By default it will use the custom hasher function for the Language object. You can enable the default behavior by commenting out `@hashers.register...`. ```python import spacy from datasets.fingerprint import Hasher from datasets.utils.registry import hashers # Register a function so that when the Hasher encounters a spacy.Language object # it uses this custom function to hash instead of the default @hashers.register(spacy.Language) def hash_spacy_language(nlp): return Hasher.hash(nlp.to_bytes()) def main(): print(hashers.get_all()) nlp = spacy.load("en_core_web_sm") dump1 = Hasher.hash(nlp) nlp = spacy.load("en_core_web_sm") dump2 = Hasher.hash(nlp) print(dump1) # succeeds when using the registered custom function # fails if using the default assert dump1 == dump2 if __name__ == '__main__': main() ``` To do ==== - The above is just a proof-of-concept. I am open to changes/suggestions - Tests still need to be written - We should consider whether we can make `DatasetsRegistry` very restrictive and ONLY allow classes. That would make testing easier - otherwise we also need to test for other sorts of objects. - Maybe the `hashers` definition is better suited in `fingerprint`? - Documentation/examples need to be updated - Not sure why the logger is not working in `hash()` - `get_cls_from_qualname` might need a fail-safe: is it possible for a full_qualname to not have a module, and if so how do we deal with that?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3206/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3206", "html_url": "https://github.com/huggingface/datasets/pull/3206", "diff_url": "https://github.com/huggingface/datasets/pull/3206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3206.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3205/comments
https://api.github.com/repos/huggingface/datasets/issues/3205/events
https://github.com/huggingface/datasets/pull/3205
1,044,099,561
PR_kwDODunzps4uEAlw
3,205
Add Multidoc2dial Dataset
{ "login": "sivasankalpp", "id": 7344617, "node_id": "MDQ6VXNlcjczNDQ2MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7344617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sivasankalpp", "html_url": "https://github.com/sivasankalpp", "followers_url": "https://api.github.com/users/sivasankalpp/followers", "following_url": "https://api.github.com/users/sivasankalpp/following{/other_user}", "gists_url": "https://api.github.com/users/sivasankalpp/gists{/gist_id}", "starred_url": "https://api.github.com/users/sivasankalpp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sivasankalpp/subscriptions", "organizations_url": "https://api.github.com/users/sivasankalpp/orgs", "repos_url": "https://api.github.com/users/sivasankalpp/repos", "events_url": "https://api.github.com/users/sivasankalpp/events{/privacy}", "received_events_url": "https://api.github.com/users/sivasankalpp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-03T20:48:31"
"2021-11-24T17:32:49"
"2021-11-24T16:55:08"
CONTRIBUTOR
null
This PR adds the MultiDoc2Dial dataset introduced in this [paper](https://arxiv.org/pdf/2109.12595v1.pdf)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3205/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3205", "html_url": "https://github.com/huggingface/datasets/pull/3205", "diff_url": "https://github.com/huggingface/datasets/pull/3205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3205.patch", "merged_at": "2021-11-24T16:55:08" }
true
https://api.github.com/repos/huggingface/datasets/issues/3204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3204/comments
https://api.github.com/repos/huggingface/datasets/issues/3204/events
https://github.com/huggingface/datasets/issues/3204
1,043,707,307
I_kwDODunzps4-NbWr
3,204
FileNotFoundError for TupleIE dataset
{ "login": "arda-vianai", "id": 75334917, "node_id": "MDQ6VXNlcjc1MzM0OTE3", "avatar_url": "https://avatars.githubusercontent.com/u/75334917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arda-vianai", "html_url": "https://github.com/arda-vianai", "followers_url": "https://api.github.com/users/arda-vianai/followers", "following_url": "https://api.github.com/users/arda-vianai/following{/other_user}", "gists_url": "https://api.github.com/users/arda-vianai/gists{/gist_id}", "starred_url": "https://api.github.com/users/arda-vianai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arda-vianai/subscriptions", "organizations_url": "https://api.github.com/users/arda-vianai/orgs", "repos_url": "https://api.github.com/users/arda-vianai/repos", "events_url": "https://api.github.com/users/arda-vianai/events{/privacy}", "received_events_url": "https://api.github.com/users/arda-vianai/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "@mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix?\r\nThanks.", "Hi @arda-vianai,\r\n\r\nfirst, you can try:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all', revision=\"master\")\r\n```\r\nIf this doesn't work, your version of `datasets` is missing some features that are required to run the dataset script, so install the master version with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand then:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all')\r\n```\r\nshould work (even without `revision`).", "@mariosasko \r\nThanks, it is working now. I actually did that before but I didn't restart the kernel. I restarted it and it works now. My bad!!!\r\nMany thanks and great job!\r\n-arda" ]
"2021-11-03T14:56:55"
"2021-11-05T15:51:15"
"2021-11-05T14:16:05"
NONE
null
Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFoundError. Is the data not available? Many thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3204/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3203/comments
https://api.github.com/repos/huggingface/datasets/issues/3203/events
https://github.com/huggingface/datasets/pull/3203
1,043,552,766
PR_kwDODunzps4uCNoT
3,203
Updated: DaNE - updated URL for download
{ "login": "MalteHB", "id": 47593213, "node_id": "MDQ6VXNlcjQ3NTkzMjEz", "avatar_url": "https://avatars.githubusercontent.com/u/47593213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MalteHB", "html_url": "https://github.com/MalteHB", "followers_url": "https://api.github.com/users/MalteHB/followers", "following_url": "https://api.github.com/users/MalteHB/following{/other_user}", "gists_url": "https://api.github.com/users/MalteHB/gists{/gist_id}", "starred_url": "https://api.github.com/users/MalteHB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MalteHB/subscriptions", "organizations_url": "https://api.github.com/users/MalteHB/orgs", "repos_url": "https://api.github.com/users/MalteHB/repos", "events_url": "https://api.github.com/users/MalteHB/events{/privacy}", "received_events_url": "https://api.github.com/users/MalteHB/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-03T12:55:13"
"2021-11-04T13:14:36"
"2021-11-04T11:46:43"
CONTRIBUTOR
null
It seems that DaNLP has updated their download URLs, and the URL therefore also needs to be updated here...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3203/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3203", "html_url": "https://github.com/huggingface/datasets/pull/3203", "diff_url": "https://github.com/huggingface/datasets/pull/3203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3203.patch", "merged_at": "2021-11-04T11:46:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/3202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3202/comments
https://api.github.com/repos/huggingface/datasets/issues/3202/events
https://github.com/huggingface/datasets/issues/3202
1,043,213,660
I_kwDODunzps4-Li1c
3,202
Add mIoU metric
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Resolved via https://github.com/huggingface/datasets/pull/3745." ]
"2021-11-03T08:42:32"
"2022-06-01T17:39:05"
"2022-06-01T17:39:04"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html). Semantic segmentation (which is the task of labeling every pixel of an image with a corresponding class) is typically evaluated using the mean Intersection over Union (mIoU). Together with the upcoming Image Feature, adding this metric could be very handy when creating example scripts to fine-tune any Transformer-based model on a semantic segmentation dataset. An implementation can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/504965184c3e6bc9ec43af54237129ef21981a5f/mmseg/core/evaluation/metrics.py#L132) for instance.
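For readers unfamiliar with the metric, below is a minimal NumPy sketch of mIoU (per-class intersection over union, averaged over the classes that occur); it is an illustration only, not the mmsegmentation implementation linked above, and `preds`/`labels` are hypothetical flattened arrays of integer class indices.

```python
import numpy as np

def mean_iou(preds: np.ndarray, labels: np.ndarray, num_classes: int) -> float:
    """Average per-class IoU; classes absent from both maps are skipped."""
    ious = []
    for cls in range(num_classes):
        pred_mask = preds == cls
        label_mask = labels == cls
        union = np.logical_or(pred_mask, label_mask).sum()
        if union == 0:
            continue  # class appears in neither prediction nor ground truth
        intersection = np.logical_and(pred_mask, label_mask).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: flattened 2x2 maps with 2 classes
preds = np.array([0, 0, 1, 1])
labels = np.array([0, 1, 1, 1])
print(mean_iou(preds, labels, num_classes=2))  # (1/2 + 2/3) / 2 β‰ˆ 0.583
```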
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3202/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3201/comments
https://api.github.com/repos/huggingface/datasets/issues/3201/events
https://github.com/huggingface/datasets/issues/3201
1,043,209,142
I_kwDODunzps4-Lhu2
3,201
Add GSM8K dataset
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Closed via https://github.com/huggingface/datasets/pull/4103" ]
"2021-11-03T08:36:44"
"2022-04-13T11:56:12"
"2022-04-13T11:56:11"
CONTRIBUTOR
null
## Adding a Dataset - **Name:** GSM8K (short for Grade School Math 8k) - **Description:** GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. - **Paper:** https://openai.com/blog/grade-school-math/ - **Data:** https://github.com/openai/grade-school-math - **Motivation:** The dataset is useful to investigate the reasoning abilities of large Transformer models, such as GPT-3. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3201/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3200/comments
https://api.github.com/repos/huggingface/datasets/issues/3200/events
https://github.com/huggingface/datasets/pull/3200
1,042,887,291
PR_kwDODunzps4uAZLu
3,200
Catch token invalid error in CI
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-02T21:56:26"
"2021-11-03T09:41:08"
"2021-11-03T09:41:08"
MEMBER
null
The staging backend sometimes returns invalid token errors when trying to delete a repo. I modified the fixture in the test that uses staging to ignore this error.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3200/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3200", "html_url": "https://github.com/huggingface/datasets/pull/3200", "diff_url": "https://github.com/huggingface/datasets/pull/3200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3200.patch", "merged_at": "2021-11-03T09:41:08" }
true
https://api.github.com/repos/huggingface/datasets/issues/3199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3199/comments
https://api.github.com/repos/huggingface/datasets/issues/3199/events
https://github.com/huggingface/datasets/pull/3199
1,042,860,935
PR_kwDODunzps4uAVzQ
3,199
Bump huggingface_hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-02T21:29:10"
"2021-11-14T01:48:11"
"2021-11-02T21:41:40"
MEMBER
null
huggingface_hub just released its first minor version, so we need to update the dependency. It was supposed to be part of 1.15.0, but I'm adding it for 1.15.1.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3199/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3199", "html_url": "https://github.com/huggingface/datasets/pull/3199", "diff_url": "https://github.com/huggingface/datasets/pull/3199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3199.patch", "merged_at": "2021-11-02T21:41:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/3198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3198/comments
https://api.github.com/repos/huggingface/datasets/issues/3198/events
https://github.com/huggingface/datasets/pull/3198
1,042,679,548
PR_kwDODunzps4t_5G8
3,198
Add Multi-Lingual LibriSpeech
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-02T18:23:59"
"2021-11-04T17:09:22"
"2021-11-04T17:09:22"
MEMBER
null
Add https://www.openslr.org/94/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3198/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3198", "html_url": "https://github.com/huggingface/datasets/pull/3198", "diff_url": "https://github.com/huggingface/datasets/pull/3198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3198.patch", "merged_at": "2021-11-04T17:09:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/3197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3197/comments
https://api.github.com/repos/huggingface/datasets/issues/3197/events
https://github.com/huggingface/datasets/pull/3197
1,042,541,127
PR_kwDODunzps4t_cry
3,197
Fix optimized encoding for arrays
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-02T15:55:53"
"2021-11-02T19:12:24"
"2021-11-02T19:12:23"
MEMBER
null
Hi ! #3124 introduced a regression that made the benchmarks CI fail because of a bad array comparison when checking the first encoded element. This PR fixes this by making sure that encoding is applied on all sequence types except lists. cc @eladsegal fyi (no big deal)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3197/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3197/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3197", "html_url": "https://github.com/huggingface/datasets/pull/3197", "diff_url": "https://github.com/huggingface/datasets/pull/3197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3197.patch", "merged_at": "2021-11-02T19:12:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/3196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3196/comments
https://api.github.com/repos/huggingface/datasets/issues/3196/events
https://github.com/huggingface/datasets/pull/3196
1,042,223,913
PR_kwDODunzps4t-bxy
3,196
QOL improvements: auto-flatten_indices and desc in map calls
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-02T11:28:50"
"2021-11-02T15:41:09"
"2021-11-02T15:41:08"
CONTRIBUTOR
null
This PR: * automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file * adds descriptions to the `map` calls. Fix #3040
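As a rough sketch of what this PR automates (toy data; the behavior described is an assumption based on the PR summary above): after `shuffle` or `select`, a dataset carries an indices mapping over its original table, and `flatten_indices` rewrites the table so no separate indices file has to be saved.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
shuffled = ds.shuffle(seed=0)        # adds an indices mapping over the original table
flat = shuffled.flatten_indices()    # materializes the shuffled order into a new table
flat.save_to_disk("toy_dataset")     # nothing extra to store once indices are flattened
```

With this change, the `flatten_indices` call happens automatically inside `unique` and `save_to_disk`, and the internal `map` calls display a `desc` string in their progress bars.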
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3196/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3196", "html_url": "https://github.com/huggingface/datasets/pull/3196", "diff_url": "https://github.com/huggingface/datasets/pull/3196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3196.patch", "merged_at": "2021-11-02T15:41:08" }
true
https://api.github.com/repos/huggingface/datasets/issues/3195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3195/comments
https://api.github.com/repos/huggingface/datasets/issues/3195/events
https://github.com/huggingface/datasets/pull/3195
1,042,204,044
PR_kwDODunzps4t-ZR0
3,195
More robust `None` handling
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-02T11:15:10"
"2021-12-09T14:27:00"
"2021-12-09T14:26:58"
CONTRIBUTOR
null
PyArrow has explicit support for `null` values, so it makes sense to support Nones on our side as well. [Colab Notebook with examples](https://colab.research.google.com/drive/1zcK8BnZYnRe3Ao2271u1T19ag9zLEiy3?usp=sharing) Changes: * allow None for the features types with special encoding (`ClassLabel, TranslationVariableLanguages, Value, _ArrayXD`) * handle None in `class_encode_column` (also there is an option to stringify Nones and treat them as a class) * support None sorting in `sort` (use pandas for that) * handle None in `align_labels_with_mapping` * support for None in ArrayXD (converts `None` to `np.nan` to align the behavior with PyArrow) * support for None in the Audio/Image feature * allow promotion when concatenating tables (`pa.concat_tables(table_list, promote=True)`) and `null` row/~~column~~ broadcasting similar to pandas Additional notes: * use `null` instead of `none` for function arguments for consistency with existing `disable_nullable` * fixes a bug with the `update_metadata_with_features` call in `Dataset.rename_columns` * had to update some tests, let me know if that's ok TODO: - [x] check how the Audio feature behaves with Nones - [x] Better None handling in `concatenate_datasets`/`add_item` - [x] Fix formatting with Nones - [x] Add Colab with examples - [x] Tests TODOs for subsequent PRs: - Mention None handling in the docs - Add `drop_null`/`fill_null` to `Dataset`/`DatasetDict` Fix #3181 #3253
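A hedged illustration of the intended behavior on toy data (exact semantics and signatures may differ between versions; this is a sketch based on the PR summary, not the merged implementation):

```python
from datasets import Dataset

ds = Dataset.from_dict({"label": ["pos", None, "neg", "pos"]})
encoded = ds.class_encode_column("label")  # None no longer breaks encoding
print(encoded.features["label"].names)     # class names inferred from non-null values
print(encoded["label"])                    # null rows stay null unless stringified as a class
```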
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3195/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3195", "html_url": "https://github.com/huggingface/datasets/pull/3195", "diff_url": "https://github.com/huggingface/datasets/pull/3195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3195.patch", "merged_at": "2021-12-09T14:26:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/3194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3194/comments
https://api.github.com/repos/huggingface/datasets/issues/3194/events
https://github.com/huggingface/datasets/pull/3194
1,041,999,535
PR_kwDODunzps4t91Eg
3,194
Update link to Datasets Tagging app in Spaces
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-11-02T08:13:50"
"2021-11-08T10:36:23"
"2021-11-08T10:36:22"
MEMBER
null
Fix #3193.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3194/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3194", "html_url": "https://github.com/huggingface/datasets/pull/3194", "diff_url": "https://github.com/huggingface/datasets/pull/3194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3194.patch", "merged_at": "2021-11-08T10:36:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/3193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3193/comments
https://api.github.com/repos/huggingface/datasets/issues/3193/events
https://github.com/huggingface/datasets/issues/3193
1,041,971,117
I_kwDODunzps4-Gzet
3,193
Update link to datasets-tagging app
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-11-02T07:39:59"
"2021-11-08T10:36:22"
"2021-11-08T10:36:22"
MEMBER
null
Once datasets-tagging has been transferred to Spaces: - huggingface/datasets-tagging#22 We should update the link in Datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3193/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3191/comments
https://api.github.com/repos/huggingface/datasets/issues/3191/events
https://github.com/huggingface/datasets/issues/3191
1,041,225,111
I_kwDODunzps4-D9WX
3,191
Dataset viewer issue for '*compguesswhat*'
{ "login": "benotti", "id": 2545336, "node_id": "MDQ6VXNlcjI1NDUzMzY=", "avatar_url": "https://avatars.githubusercontent.com/u/2545336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benotti", "html_url": "https://github.com/benotti", "followers_url": "https://api.github.com/users/benotti/followers", "following_url": "https://api.github.com/users/benotti/following{/other_user}", "gists_url": "https://api.github.com/users/benotti/gists{/gist_id}", "starred_url": "https://api.github.com/users/benotti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benotti/subscriptions", "organizations_url": "https://api.github.com/users/benotti/orgs", "repos_url": "https://api.github.com/users/benotti/repos", "events_url": "https://api.github.com/users/benotti/events{/privacy}", "received_events_url": "https://api.github.com/users/benotti/received_events", "type": "User", "site_admin": false }
[ { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/compguesswhat/4d08b9e0a8d1cf036c9626c93be4a759fdd9fcce050ea503ea14b075e830c799/compguesswhat.py\", line 251, in _generate_examples\r\n with gzip.open(filepath) as in_file:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 58, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 173, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://compguesswhat-original/0.2.0/compguesswhat.train.jsonl.gz::https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1'\r\n```\r\n\r\nIt's an issue with the streaming mode. Note that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. This dataset is above the limit, hence the error.\r\n\r\nSame case as https://github.com/huggingface/datasets/issues/3186#issuecomment-1096549774.", "cc @huggingface/datasets ", "There is an issue with the URLs of their data files: https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1\r\n> Dropbox Error: That didn't work for some reason\r\n\r\nError reported to their repo:\r\n- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1", "Closed by:\r\n- #4968" ]
"2021-11-01T14:16:49"
"2022-09-12T08:02:29"
"2022-09-12T08:02:29"
NONE
null
## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset? No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3191/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3190/comments
https://api.github.com/repos/huggingface/datasets/issues/3190/events
https://github.com/huggingface/datasets/issues/3190
1,041,153,631
I_kwDODunzps4-Dr5f
3,190
combination of shuffle and filter results in a bug
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I cannot reproduce this on master and pyarrow==4.0.1.\r\n", "Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed in #3019 in 1.13\r\n\r\nCan you try to update `datasets` and try again ?", "Thanks a lot, fixes with 1.13" ]
"2021-11-01T13:07:29"
"2021-11-02T10:50:49"
"2021-11-02T10:50:49"
CONTRIBUTOR
null
## Describe the bug Hi, I would like to shuffle a dataset, then filter it based on each existing label. However, the combination of `filter` and `shuffle` seems to result in a bug. In the minimal example below, as you see in the filtered results, the filtered labels are not unique, meaning filter has not worked. Any suggestion for a temporary fix would be appreciated @lhoestq. Thanks. Best regards Rabeeh ## Steps to reproduce the bug ```python import numpy as np import datasets datasets = datasets.load_dataset('super_glue', 'rte', script_version="master") shuffled_data = datasets["train"].shuffle(seed=42) for label in range(2): print("label ", label) data = shuffled_data.filter(lambda example: int(example['label']) == label) print("length ", len(data), np.unique(data['label'])) ``` ## Expected results Filtering per label should only return the data with that specific label. ## Actual results As you can see, the filtered data for each label still contains both labels [0, 1] ``` label 0 length 1249 [0 1] label 1 length 1241 [0 1] ``` ## Environment info - `datasets` version: 1.12.1 - Platform: linux - Python version: 3.7.11 - PyArrow version: 5.0.0
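As the comments above note, the regression was fixed in `datasets` 1.13 via #3019. On the affected 1.12.x, one plausible stopgap (an assumption, not confirmed by the maintainers in this thread) is to drop the indices mapping that the regression mishandled by calling `flatten_indices` after shuffling:

```python
# Continues the reproduction script above on datasets 1.12.x
shuffled_data = datasets["train"].shuffle(seed=42).flatten_indices()
for label in range(2):
    data = shuffled_data.filter(lambda example: int(example["label"]) == label)
    print("label", label, "length", len(data), np.unique(data["label"]))
```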
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3190/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3189/comments
https://api.github.com/repos/huggingface/datasets/issues/3189/events
https://github.com/huggingface/datasets/issues/3189
1,041,044,986
I_kwDODunzps4-DRX6
3,189
conll2003 incorrect label explanation
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @BramVanroy,\r\n\r\nsince these fields are of type `ClassLabel` (you can check this with `dset.features`), you can inspect the possible values with:\r\n```python\r\ndset.features[field_name].feature.names # .feature because it's a sequence of labels\r\n```\r\n\r\nand to find the mapping between names and integers, use: \r\n```python\r\ndset.features[field_name].feature.int2str(value_or_values_list) # map integer value to string value\r\n# or\r\ndset.features[field_name].feature.str2int(value_or_values_list) # map string value to integer value\r\n```\r\n\r\n" ]
"2021-11-01T11:03:30"
"2021-11-09T10:40:58"
"2021-11-09T10:40:58"
CONTRIBUTOR
null
In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows: > - `id`: a `string` feature. > - `tokens`: a `list` of `string` features. > - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4). > - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4). > - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4) `B-LOC` (5), `I-LOC` (6) `B-MISC` (7), `I-MISC` (8). First of all, it would be great if we could get a list of ALL possible pos_tags. Second, the chunk tag labels cannot be correct. The description says the values go from 0 to 4, whereas the data shows values from at least 11 to 21 as well as 0. EDIT: not really a bug, sorry for mistagging.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3189/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3188/comments
https://api.github.com/repos/huggingface/datasets/issues/3188/events
https://github.com/huggingface/datasets/issues/3188
1,040,980,712
I_kwDODunzps4-DBro
3,188
conll2002 issues
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting :)\r\n\r\nThis is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.\r\n", "Ah, hadn't seen that sorry.\r\n\r\nThe scrambled \"point of contact\" is a separate issue though, I think.", "@lhoestq The \"point of contact\" is still an issue.", "It will be fixed in https://github.com/huggingface/datasets/pull/3274, thanks" ]
"2021-11-01T09:49:24"
"2021-11-15T13:50:59"
"2021-11-12T17:18:11"
CONTRIBUTOR
null
**Link:** https://huggingface.co/datasets/conll2002 The dataset viewer throws a server error when trying to preview the dataset. ``` Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet ``` In addition, the "point of contact" has encoding issues and does not work when clicked. Am I the one who added this dataset? No, @lhoestq did
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3188/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3187/comments
https://api.github.com/repos/huggingface/datasets/issues/3187/events
https://github.com/huggingface/datasets/pull/3187
1,040,412,869
PR_kwDODunzps4t44Ab
3,187
Add ChrF(++) (as implemented in sacrebleu)
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-31T08:53:58"
"2021-11-02T14:50:50"
"2021-11-02T14:31:26"
CONTRIBUTOR
null
Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in sacrebleu and are therefore just as straightforward to implement as TER. I tested the implementation with sacrebleu's tests to verify. You can try this below for yourself: ```python import datasets EPSILON = 1e-4 chrf = datasets.load_metric(r"path\to\datasets\metrics\chrf") test_cases = [ (["abcdefg"], ["hijklmnop"], 0.0), (["a"], ["b"], 0.0), ([""], ["b"], 0.0), ([""], ["ref"], 0.0), ([""], ["reference"], 0.0), (["aa"], ["ab"], 8.3333), (["a", "b"], ["a", "c"], 8.3333), (["a"], ["a"], 16.6667), (["a b c"], ["a b c"], 50.0), (["a b c"], ["abc"], 50.0), ([" risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."], ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 63.361730), ([" Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich. "], ["Das VerhΓ€ltnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 64.1302698), (["Niemand hat die Absicht, eine Mauer zu errichten"], ["Niemand hat die Absicht, eine Mauer zu errichten"], 100.0), ] for hyp, ref, score in test_cases: # Note the reference transformation which is different from sacrebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, eps_smoothing=True) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") test_cases_effective_order = [ (["a"], ["a"], 100.0), ([""], ["reference"], 0.0), (["a b c"], ["a b c"], 100.0), (["a b c"], ["abc"], 100.0), ([""], ["c"], 0.0), (["a", "b"], ["a", "c"], 50.0), (["aa"], ["ab"], 25.0), ] for hyp, ref, score in test_cases_effective_order: # Note the reference transformation which is different from sacrebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, eps_smoothing=False) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") test_cases_keep_whitespace = [ ( ["Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich."], ["Das VerhΓ€ltnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 67.3481606, ), ( ["risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."], ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 65.2414427, ), ] for hyp, ref, score in test_cases_keep_whitespace: # Note the reference transformation which is different from sacrebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, whitespace=True) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") predictions = ["The relationship between Obama and Netanyahu is not exactly friendly."] references = [["The ties between Obama and Netanyahu are not particularly friendly."]] print(chrf.compute(predictions=predictions, references=references)) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3187/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3187", "html_url": "https://github.com/huggingface/datasets/pull/3187", "diff_url": "https://github.com/huggingface/datasets/pull/3187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3187.patch", "merged_at": "2021-11-02T14:31:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/3186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3186/comments
https://api.github.com/repos/huggingface/datasets/issues/3186/events
https://github.com/huggingface/datasets/issues/3186
1,040,369,397
I_kwDODunzps4-Asb1
3,186
Dataset viewer for nli_tr
{ "login": "e-budur", "id": 2246791, "node_id": "MDQ6VXNlcjIyNDY3OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/e-budur", "html_url": "https://github.com/e-budur", "followers_url": "https://api.github.com/users/e-budur/followers", "following_url": "https://api.github.com/users/e-budur/following{/other_user}", "gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}", "starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-budur/subscriptions", "organizations_url": "https://api.github.com/users/e-budur/orgs", "repos_url": "https://api.github.com/users/e-budur/repos", "events_url": "https://api.github.com/users/e-budur/events{/privacy}", "received_events_url": "https://api.github.com/users/e-budur/received_events", "type": "User", "site_admin": false }
[ { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "It's an issue with the streaming mode:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/nli_tr/c2ddd0c0a70caddac6a81c2dae5ca7939f00060d517d08f1983927818dba6521/nli_tr.py\", line 155, in _generate_examples\r\n with codecs.open(filepath, encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/codecs.py\", line 905, in open\r\n file = builtins.open(filename, mode, buffering)\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_test.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip'\r\n```\r\n\r\nNote that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. `nli_tr` is above the limit, hence the error.", "cc @huggingface/datasets ", "Apparently there is an issue with the data source URLs: Server Not Found\r\n- https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip\r\n\r\nWe are contacting the authors to ask them: \r\n@e-budur you are one of the authors: are you aware of the issue with the URLs of your data ?", "Reported to their repo:\r\n- https://github.com/boun-tabi/NLI-TR/issues/9", "The server issue was temporary and is now resolved.", "Once we have implemented support for streaming, the viewer works: https://huggingface.co/datasets/nli_tr" ]
"2021-10-31T03:56:33"
"2022-09-12T09:15:34"
"2022-09-12T08:43:09"
CONTRIBUTOR
null
## Dataset viewer issue for '*nli_tr*' **Link:** https://huggingface.co/datasets/nli_tr Hello, Thank you for the new dataset preview feature that will help the users to view the datasets online. We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the error below. The error must be due to a temporary problem that may have blocked access to the dataset through the dataset viewer. But the dataset is currently accessible through the link in the error message. May we kindly ask if it would be possible to rerun the job so that it can access the dataset for the dataset viewer function? Thank you. Emrah ------------------------------------------ Server Error Status code: 404 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_train.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip ------------------------------------------ Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3186/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3185/comments
https://api.github.com/repos/huggingface/datasets/issues/3185/events
https://github.com/huggingface/datasets/issues/3185
1,040,291,961
I_kwDODunzps4-AZh5
3,185
7z dataset preview not implemented?
{ "login": "Kirili4ik", "id": 30757466, "node_id": "MDQ6VXNlcjMwNzU3NDY2", "avatar_url": "https://avatars.githubusercontent.com/u/30757466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kirili4ik", "html_url": "https://github.com/Kirili4ik", "followers_url": "https://api.github.com/users/Kirili4ik/followers", "following_url": "https://api.github.com/users/Kirili4ik/following{/other_user}", "gists_url": "https://api.github.com/users/Kirili4ik/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kirili4ik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kirili4ik/subscriptions", "organizations_url": "https://api.github.com/users/Kirili4ik/orgs", "repos_url": "https://api.github.com/users/Kirili4ik/repos", "events_url": "https://api.github.com/users/Kirili4ik/events{/privacy}", "received_events_url": "https://api.github.com/users/Kirili4ik/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix.", "Fixed. https://huggingface.co/datasets/samsum/viewer/samsum/train\r\n\r\n<img width=\"1563\" alt=\"Capture d’écran 2022-04-12 aΜ€ 13 47 45\" src=\"https://user-images.githubusercontent.com/1676121/162953339-cd8922d7-9037-408b-b896-eac1af0bb54f.png\">\r\n\r\nThanks for reporting!" ]
"2021-10-30T20:18:27"
"2022-04-12T11:48:16"
"2022-04-12T11:48:07"
NONE
null
## Dataset viewer issue for dataset 'samsum' **Link:** https://huggingface.co/datasets/samsum Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3185/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3185/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3184/comments
https://api.github.com/repos/huggingface/datasets/issues/3184/events
https://github.com/huggingface/datasets/pull/3184
1,040,114,102
PR_kwDODunzps4t4J61
3,184
RONEC v2
{ "login": "dumitrescustefan", "id": 22746816, "node_id": "MDQ6VXNlcjIyNzQ2ODE2", "avatar_url": "https://avatars.githubusercontent.com/u/22746816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dumitrescustefan", "html_url": "https://github.com/dumitrescustefan", "followers_url": "https://api.github.com/users/dumitrescustefan/followers", "following_url": "https://api.github.com/users/dumitrescustefan/following{/other_user}", "gists_url": "https://api.github.com/users/dumitrescustefan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dumitrescustefan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dumitrescustefan/subscriptions", "organizations_url": "https://api.github.com/users/dumitrescustefan/orgs", "repos_url": "https://api.github.com/users/dumitrescustefan/repos", "events_url": "https://api.github.com/users/dumitrescustefan/events{/privacy}", "received_events_url": "https://api.github.com/users/dumitrescustefan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-30T10:50:03"
"2021-11-02T16:02:23"
"2021-11-02T16:02:22"
CONTRIBUTOR
null
Hi, as we've recently finished the new RONEC (Romanian Named Entity Corpus), we'd like to update the dataset here as well. It's actually essential, as the links to v1 are no longer valid. Ideally we'd like to replace v1 completely, as v2 is a full re-annotation of v1 with additional data (up to 2x the size of v1). I've run `make style` and all the dummy and real data tests, and they passed. I hope it's okay to merge the new RONEC v2 into datasets. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3184/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3184", "html_url": "https://github.com/huggingface/datasets/pull/3184", "diff_url": "https://github.com/huggingface/datasets/pull/3184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3184.patch", "merged_at": "2021-11-02T16:02:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/3183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3183/comments
https://api.github.com/repos/huggingface/datasets/issues/3183/events
https://github.com/huggingface/datasets/pull/3183
1,039,761,120
PR_kwDODunzps4t3Dag
3,183
Add missing docstring to DownloadConfig
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-29T16:56:35"
"2021-11-02T10:25:38"
"2021-11-02T10:25:37"
CONTRIBUTOR
null
Document the `use_etag` and `num_proc` attributes in `DownloadConfig`.
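For context, a minimal sketch of how those two attributes can be used; the semantics are paraphrased here rather than quoted from the added docstring:

```python
from datasets import DownloadConfig, load_dataset

# use_etag: whether to request the remote file's ETag when caching, so that a
# changed remote file invalidates the local cache.
# num_proc: number of processes used to download files in parallel.
config = DownloadConfig(use_etag=False, num_proc=4)
ds = load_dataset("wmt16", "de-en", download_config=config)
```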
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3183/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3183", "html_url": "https://github.com/huggingface/datasets/pull/3183", "diff_url": "https://github.com/huggingface/datasets/pull/3183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3183.patch", "merged_at": "2021-11-02T10:25:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3182/comments
https://api.github.com/repos/huggingface/datasets/issues/3182/events
https://github.com/huggingface/datasets/pull/3182
1,039,739,606
PR_kwDODunzps4t2-9J
3,182
Don't memoize strings when hashing since two identical strings may have different python ids
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-29T16:26:17"
"2021-11-02T09:35:38"
"2021-11-02T09:35:37"
MEMBER
null
When hashing an object that contains the same string several times, the hashing could return a different hash depending on whether or not the identical strings share the same python `id()`. Here is an example that shows how the issue can affect the caching: ```python import json import pyarrow as pa from datasets.features import Features from datasets.fingerprint import Hasher schema = pa.schema([pa.field("some_string", pa.string()), pa.field("another_string", pa.string())]) features_from_schema = Features.from_arrow_schema(schema) Hasher.hash(features_from_schema) # dffa9dca9a73fd8c features_dict = json.loads('{"some_string": {"dtype": "string", "id": null, "_type": "Value"}, "another_string": {"dtype": "string", "id": null, "_type": "Value"}}') features_from_json = Features.from_dict(features_dict) Hasher.hash(features_from_json) # 3812e76b15e6420e features_from_schema == features_from_json # True ``` This is because in `features_dict`, some strings like "dtype" are repeated but don't share the same id, contrary to the ones in `features_from_schema`. I fixed that by disabling memoization for strings. This could be optimized in the future by implementing smarter memoization with special handling for strings.
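To see the id-sensitivity in isolation, here is a minimal sketch using the standard `pickle` module, whose memo is keyed by object identity much like the dill-based pickler used for fingerprinting (that the fingerprinting pickler memoizes the same way is an assumption here, not a quote from its source):

```python
import pickle

# Two equal strings built at runtime, so CPython does not fold them into one
# interned constant and they end up with different id()s.
s = "".join(["dty", "pe"])
t = "".join(["dty", "pe"])
assert s == t and s is not t

# With memoization, the second occurrence of the *same* object is written as a
# memo reference, so the two byte streams differ even though the values are
# identical, and so would any hash computed over those bytes.
print(pickle.dumps([s, s]) == pickle.dumps([s, t]))  # False
```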
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3182/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3182", "html_url": "https://github.com/huggingface/datasets/pull/3182", "diff_url": "https://github.com/huggingface/datasets/pull/3182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3182.patch", "merged_at": "2021-11-02T09:35:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3181/comments
https://api.github.com/repos/huggingface/datasets/issues/3181/events
https://github.com/huggingface/datasets/issues/3181
1,039,682,097
I_kwDODunzps49-Eox
3,181
`None` converted to `"None"` when loading a dataset
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @eladsegal, thanks for reporting.\r\n\r\n@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.\r\n\r\nAll values are casted to their corresponding feature type (including `None` values). For example if the feature type is `Value(\"bool\")`, `None` is casted to `False`.\r\n\r\nIt is true that strings were an exception, but this was recently fixed by @lhoestq (see #3158).", "Thanks for reporting.\r\n\r\nThis is actually a breaking change that I think can cause issues when users preprocess their data. String columns used to be nullable. Maybe we can correct https://github.com/huggingface/datasets/pull/3158 to keep the None values and avoid this breaking change ?\r\n\r\nEDIT: the other types (bool, int, etc) can also become nullable IMO", "So what would be the best way to handle a feature that can have a null value in some of the instances? So far I used `None`.\r\nUsing the empty string won't be a good option, as it can be an actual value in the data and is not the same as not having a value at all.", "Hi @eladsegal,\r\n\r\nUse `None`. As @albertvillanova correctly pointed out, this change in conversion was introduced (by mistake) in #3158. To avoid it, install the earlier revision with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@8107844ec0e7add005db0585c772ee20adc01a5e\r\n```\r\n\r\nI'm making all the feature types nullable as we speak, and the fix will be merged probably early next week.", "Hi @mariosasko, is there an estimation as to when this issue will be fixed?", "https://github.com/huggingface/datasets/pull/3195 fixed it, we'll do a new release soon :)\r\n\r\nFor now feel free to install `datasets` from the master branch", "Thanks, but unfortunately looks like it isn't fixed yet 😒 \r\n[notebook for 1.14.0](https://colab.research.google.com/drive/1SV3sFXPJMWSQgbm4pr9Y1Q8OJ4JYKcDo?usp=sharing)\r\n[notebook for master](https://colab.research.google.com/drive/145wDpuO74MmsuI0SVLcI1IswG6aHpyhi?usp=sharing)", "Oh, sorry. I deleted the fix by accident when I was resolving a merge conflict. Let me fix this real quick.", "Thank you, it works! 🎊 " ]
"2021-10-29T15:23:53"
"2021-12-11T01:16:40"
"2021-12-09T14:26:57"
CONTRIBUTOR
null
## Describe the bug When loading a dataset, `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text"]["section_name"]) ``` When installing version 1.14.0, the output is `[None, 'Introduction', 'Benchmark Datasets', ...]` When installing from the master branch, the output is `['None', 'Introduction', 'Benchmark Datasets', ...]` Notice how the first element was changed from `NoneType` to `str`. ## Expected results `None` should stay as is. ## Actual results `None` is converted to a string. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: master - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
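A minimal regression check, sketched here under the assumption that an in-memory dataset goes through the same type-casting path as a loaded one:

```python
from datasets import Dataset

# Once string columns are nullable again, the None should round-trip as-is
# instead of being cast to the string "None".
ds = Dataset.from_dict({"section_name": ["Introduction", None]})
print(ds["section_name"])  # expected: ['Introduction', None]
```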
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3181/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3180/comments
https://api.github.com/repos/huggingface/datasets/issues/3180/events
https://github.com/huggingface/datasets/pull/3180
1,039,641,316
PR_kwDODunzps4t2qQn
3,180
fix label mapping
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-29T14:42:24"
"2021-11-02T13:41:07"
"2021-11-02T10:37:12"
MEMBER
null
Fixing the label mapping for hlgd: 0 corresponds to the same event and 1 corresponds to a different event. <img width="642" alt="Capture d’écran 2021-10-29 à 10 39 58 AM" src="https://user-images.githubusercontent.com/16107619/139454810-1f225e3d-ad48-44a8-b8b1-9205c9533839.png"> <img width="638" alt="Capture d’écran 2021-10-29 à 10 40 09 AM" src="https://user-images.githubusercontent.com/16107619/139454813-93066a3c-7d33-4f56-b133-2f1a7661e438.png">
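For reference, a sketch of what the corrected feature definition presumably boils down to, i.e. the order of the names passed to `ClassLabel` in the hlgd loading script (this is not the actual diff):

```python
import datasets

# 0 -> same_event, 1 -> different_event, matching the screenshots above.
label_feature = datasets.ClassLabel(names=["same_event", "different_event"])
print(label_feature.int2str(0))  # 'same_event'
print(label_feature.int2str(1))  # 'different_event'
```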
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3180/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3180", "html_url": "https://github.com/huggingface/datasets/pull/3180", "diff_url": "https://github.com/huggingface/datasets/pull/3180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3180.patch", "merged_at": "2021-11-02T10:37:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/3179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3179/comments
https://api.github.com/repos/huggingface/datasets/issues/3179/events
https://github.com/huggingface/datasets/issues/3179
1,039,571,928
I_kwDODunzps499pvY
3,179
Cannot load dataset when the config name is "special"
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "The issue is that the datasets are malformed. Not a bug with the datasets library" ]
"2021-10-29T13:30:47"
"2021-10-29T13:35:21"
"2021-10-29T13:35:21"
CONTRIBUTOR
null
## Describe the bug After https://github.com/huggingface/datasets/pull/3159, we can get the config name of "Check/region_1", which is "Check___region_1". But now we cannot load the dataset (not sure it's related to the above PR though). It's the case for all the similar datasets, listed in https://github.com/huggingface/datasets-preview-backend/issues/78 ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_config_names >>> get_dataset_config_names("Check/region_1") ['Check___region_1'] >>> load_dataset("Check/region_1") Using custom data configuration Check___region_1-d2b3bc48f11c9be2 Downloading and preparing dataset json/Check___region_1 to /home/slesage/.cache/huggingface/datasets/json/Check___region_1-d2b3bc48f11c9be2/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426... 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 4443.12it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 1277.19it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split writer.write_table(table) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File 
"pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "builder_name" does not exist in table schema' ``` Loading in streaming mode also returns something strange: ```python >>> list(load_dataset("Check/region_1", streaming=True, split="train")) Using custom data configuration Check___region_1-d2b3bc48f11c9be2 [{'builder_name': None, 'citation': '', 'config_name': None, 'dataset_size': None, 'description': '', 'download_checksums': None, 'download_size': None, 'features': {'speech': {'feature': {'dtype': 'float64', 'id': None, '_type': 'Value'}, 'length': -1, 'id': None, '_type': 'Sequence'}, 'sampling_rate': {'dtype': 'int64', 'id': None, '_type': 'Value'}, 'label': {'dtype': 'string', 'id': None, '_type': 'Value'}}, 'homepage': '', 'license': '', 'post_processed': None, 'post_processing_size': None, 'size_in_bytes': None, 'splits': None, 'supervised_keys': None, 'task_templates': None, 'version': None}, {'_data_files': [{'filename': 'dataset.arrow'}], '_fingerprint': 'f1702bb5533c549c', '_format_columns': ['speech', 'sampling_rate', 'label'], '_format_kwargs': {}, '_format_type': None, '_indexes': {}, '_indices_data_files': None, '_output_all_columns': False, '_split': None}] ``` ## Expected results The dataset should be loaded ## Actual results An error occurs ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: Linux-5.11.0-1020-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3179/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3178/comments
https://api.github.com/repos/huggingface/datasets/issues/3178/events
https://github.com/huggingface/datasets/issues/3178
1,039,539,076
I_kwDODunzps499huE
3,178
"Property couldn't be hashed properly" even though fully picklable
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "After some digging, I found that this is caused by `dill` and using `recurse=True)` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this:\r\n\r\n> If recurse=True, then objects referred to in the global dictionary are recursively traced and pickled, instead of the default behavior of attempting to store the entire global dictionary. This is needed for functions defined via exec().\r\n\r\nIn the utils, this is explicitly enabled\r\n\r\nhttps://github.com/huggingface/datasets/blob/df63614223bf1dd1feb267d39d741bada613352c/src/datasets/utils/py_utils.py#L327-L330\r\n\r\nIs this really necessary? Is there a way around it? Also pinging the spaCy team in case this is easy to solve on their end. (I hope so.)", "Hi ! Thanks for reporting\r\n\r\nYes `recurse=True` is necessary to be able to hash all the objects that are passed to the `map` function\r\n\r\nEDIT: hopefully this object can be serializable soon, but otherwise we can consider adding more control to the user on how to hash objects that are not serializable (as mentioned in https://github.com/huggingface/datasets/issues/3044#issuecomment-948818210)", "I submitted a PR to spacy that should fix this issue (linked above). I'll leave this open until that PR is merged. ", "@lhoestq After some testing I find that even with the updated spaCy, no cache files are used. I do not get any warnings though, but I can see that map is run every time I run the code. Do you have thoughts about why? If you want to try the tests below, make sure to install spaCy from [here](https://github.com/BramVanroy/spaCy) and installing the base model with `python -m spacy download en_core_web_sm`.\r\n\r\n```python\r\nfrom functools import partial\r\nfrom pathlib import Path\r\n\r\nimport spacy\r\nfrom datasets import Dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n lines = Path(fin).read_text(encoding=\"utf-8\").splitlines()\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n ds = Dataset.from_dict({\"text\": lines, \"text_id\": list(range(len(lines)))})\r\n tok = partial(tokenize, nlp)\r\n ds = ds.map(tok, load_from_cache_file=True)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n... 
or with load_dataset (here I get the message that `load_dataset` can reuse the dataset, but still I see all samples being processed via the tqdm progressbar):\r\n\r\n```python\r\nfrom functools import partial\r\n\r\nimport spacy\r\nfrom datasets import load_dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, sample):\r\n return {\"tok\": [t.text for t in nlp(sample[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n tok_func = partial(tokenize, nlp)\r\n ds = load_dataset('text', data_files=fin)\r\n ds = ds[\"train\"].map(tok_func)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```", "It looks like every time you load `en_core_web_sm` you get a different python object:\r\n```python\r\nimport spacy\r\nfrom datasets.fingerprint import Hasher\r\n\r\nnlp1 = spacy.load(\"en_core_web_sm\")\r\nnlp2 = spacy.load(\"en_core_web_sm\")\r\nHasher.hash(nlp1), Hasher.hash(nlp2)\r\n# ('f6196a33882fea3b', 'a4c676a071f266ff')\r\n```\r\nHere is a list of attributes that have different hashes for `nlp1` and `nlp2`:\r\n- tagger\r\n- parser\r\n- entity\r\n- pipeline (it's the list of the three attributes above)\r\n\r\nI just took a look at the tagger for example and I found subtle differences (there may be other differences though):\r\n```python\r\nnlp1.tagger.model.tok2vec.embed.id, nlp2.tagger.model.tok2vec.embed.id\r\n# (1721, 2243)\r\n```\r\n\r\nWe can try to find all the differences and find the best way to hash those objects properly", "Thanks for searching! I went looking, and found that this is an implementation detail of thinc\r\n\r\nhttps://github.com/explosion/thinc/blob/68691e303ae68cae4bc803299016f1fc064328bf/thinc/model.py#L96-L98\r\n\r\nPresumably (?) exactly to distinguish between different parts in memory when multiple models are loaded. Do not think that this can be changed on their end - but I will ask what exactly it is for (I'm curious).\r\n\r\nDo you think it is overkill to write something into the hasher explicitly to deal with spaCy models? It seems like something that is beneficial to many, but I do not know if you are open to adding third-party-specific ways to deal with this. If you are, I can have a look for this specific case how we can ignore `thinc.Model.id` from the hasher.", "It can be even simpler to hash the bytes of the pipeline instead\r\n```python\r\nnlp1.to_bytes() == nlp2.to_bytes() # True\r\n```\r\n\r\nIMO we should integrate the custom hashing for spacy models into `datasets` (we use a custom Pickler for that).\r\nWhat could be done on Spacy's side instead (if they think it's nice to have) is to implement a custom pickling for these classes using `to_bytes`/`from_bytes` to have deterministic pickle dumps.\r\n\r\nFinally I think it would be nice in the future to add an API to let `datasets` users control this kind of things. Something like being able to define your own hashing if you use complex objects.\r\n```python\r\n@datasets.register_hash(spacy.language.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n```", "I do not quite understand what you mean. as far as I can tell, using `to_bytes` does a pickle dump behind the scene (with `srsly`), recursively using `to_bytes` on the required objects. Therefore, the result of `to_bytes` is a deterministic pickle dump AFAICT. Or do you mean that you wish that using your own pickler and running `dumps(nlp)` should also be deterministic? 
I guess that would require `__setstate__` and `__getstate__` methods on all the objects that have to/from_bytes. I'll have a listen over at spaCy what they think, and if that would solve the issue. I'll try this locally first, if I find the time.\r\n\r\nI agree that having the option to use a custom hasher would be useful. I like your suggestion!\r\n\r\nEDIT: after trying some things and reading through their API, it seems that they explicitly do not want this. https://spacy.io/usage/saving-loading#pipeline\r\n\r\n> When serializing the pipeline, keep in mind that this will only save out the binary data for the individual components to allow spaCy to restore them – not the entire objects. This is a good thing, because it makes serialization safe. But it also means that you have to take care of storing the config, which contains the pipeline configuration and all the relevant settings.\r\n\r\nBest way forward therefore seems to implement the ability to specify a hasher depending on the objects that are pickled, as you suggested. I can work on this if that is useful. I could use some pointers as to how you would like to implement the `register_hash` functionality though. I assume using `catalogue` over at Explosion might be a good starting point.\r\n\r\n", "Interestingly, my PR does not solve the issue discussed above. The `tokenize` function hash is different on every run, because for some reason `nlp.__call__` has a different hash every time. The issue therefore seems to run much deeper than I thought. If you have any ideas, I'm all ears.\r\n\r\n```shell\r\ngit clone https://github.com/explosion/spaCy.git\r\ncd spaCy/\r\ngit checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf\r\ncd ..\r\n\r\ngit clone https://github.com/BramVanroy/datasets.git\r\ncd datasets\r\ngit checkout registry\r\npip install -e .\r\npip install ../spaCy\r\nspacy download en_core_web_sm\r\n```\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom datasets import load_dataset\r\nfrom datasets.fingerprint import Hasher\r\nfrom datasets.utils.registry import hashers\r\n\r\n@hashers.register(spacy.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n\r\ndef main():\r\n fin = r\"your/large/file\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n # This is now always the same yay!\r\n print(Hasher.hash(nlp))\r\n\r\n def tokenize(l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\n ds = load_dataset(\"text\", data_files=fin)\r\n # But this is not...\r\n print(Hasher.hash(tokenize))\r\n # ... because of this\r\n print(Hasher.hash(nlp.__call__))\r\n ds = ds[\"train\"].map(tokenize)\r\n print(ds[0:2])\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```", "Hi ! I just answered in your PR :) In order for your custom hashing to be used for nested objects, you must integrate it into our recursive pickler that we use for hashing.", "I don't quite understand the design constraints of `datasets` or the script that you're running, but my usual advice is to avoid using pickle unless you _absolutely_ have to. So for instance instead of doing your `partial` over the `nlp` object itself, can you just pass the string `en_core_web_sm` in? 
This will mean calling `spacy.load()` inside the work function, but this is no worse than having to call `pickle.load()` on the contents of the NLP object anyway -- in fact you'll generally find `spacy.load()` faster, apart from the disk read.\r\n\r\nIf you need to pass in the bytes data and don't want to read from disk, you could do something like this:\r\n\r\n```\r\nmsg = (nlp.lang, nlp.to_bytes())\r\n\r\ndef unpack(lang, bytes_data):\r\n return spacy.blank(lang).from_bytes(bytes_data)\r\n```\r\n\r\nI think that should probably work: the Thinc `model.to_dict()` method (which is used by the `model.to_bytes()` method) doesn't pack the model's ID into the message, so the `nlp.to_bytes()` that you get shouldn't be affected by the global IDs. So you should get a clean message from `nlp.to_bytes()` that doesn't depend on the global state.", "Hi Matthew, thanks for chiming in! We are currently implementing exactly what you suggest: `to_bytes()` as a default before pickling - but we may prefer `to_dict` to avoid double dumping.\r\n\r\n`datasets` uses pickle dumps (actually dill) to get unique representations of processing steps (a \"fingerprint\" or hash). So it never needs to re-load that dump - it just needs its value to create a hash. If a fingerprint is identical to a cached fingerprint, then the result can be retrieved from the on-disk cache. (@lhoestq or @mariosasko can correct me if I'm wrong.)\r\n\r\nI was experiencing the issue that parsing with spaCy gave me a different fingerprint on every run of the script and thus it could never load the processed dataset from cache. At first I thought the reason was that spaCy Language objects were not picklable with recursive dill, but even after [adjusting for that](https://github.com/explosion/spaCy/pull/9593) the issue persisted. @lhoestq found that this is due to the changing `id`, which you discussed [here](https://github.com/explosion/spaCy/discussions/9609#discussioncomment-1661081). So yes, you are right. On the surface there simply seems to be an incompatibility between `datasets` default caching functionality as it is currently implemented and `spacy.Language`.\r\n\r\nThe [linked PR](https://github.com/huggingface/datasets/pull/3224) aims to remedy that, though. Up to now I have put some effort into making it easier to define your own \"pickling\" function for a given type (and optionally any of its subclasses). That allows us to tell `datasets` that instead of doing `dill.save(nlp)` (non-deterministic), to use `dill.save(nlp.to_bytes())` (deterministic). When I find some more time, the PR [will be expanded](https://github.com/huggingface/datasets/pull/3224#issuecomment-968958528) to improve the user-experience a bit and add a built-in function to pickle `spacy.Language` as one of the defaults (using `to_bytes()`).", "Is there a workaround for this? maybe by explicitly requesting datasets to cache the result of `.map()`?", "Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. 
The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n\r\nAs a workaround you can set the fingerprint that is going to be used by the cache:\r\n```python\r\nresult = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n```\r\nAny future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n\r\n**Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**", "I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n\r\n```\r\nDataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\nParameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform datasets.arrow_dataset.Dataset.filter@2.0.1 couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n```\r\n\r\nAnd when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n\r\nFor me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n\r\n```\r\ndill 0.3.4\r\nmultiprocess 0.70.12.2 \r\n```", "> Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n> \r\n> As a workaround you can set the fingerprint that is going to be used by the cache:\r\n> \r\n> ```python\r\n> result = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n> ```\r\n> \r\n> Any future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n> \r\n> **Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**\r\n\r\nIs the argument `new_fingerprint` available for datasetDict ? I can only use it on arrow datasets but might be useful to generalize it to DatasetDict's map as well ? @lhoestq ", "> I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n> \r\n> ```\r\n> Dataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\n> Parameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform datasets.arrow_dataset.Dataset.filter@2.0.1 couldn't be hashed properly, a random hash was used instead. 
Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n> ```\r\n> \r\n> And when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n> \r\n> For me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n> \r\n> ```\r\n> dill 0.3.4\r\n> multiprocess 0.70.12.2 \r\n> ```\r\n\r\nThis worked for me - thanks!", "I see this has just been closed - it seems quite relevant to another tokenizer I have been trying to use, the `vinai/phobert` family of tokenizers\r\n\r\nhttps://huggingface.co/vinai/phobert-base\r\nhttps://huggingface.co/vinai/phobert-large\r\n\r\nI ran into an issue where a large dataset took several hours to tokenize, the process hung, and I was unable to use the cached version of the tokenized data:\r\n\r\nhttps://discuss.huggingface.co/t/cache-parallelize-long-tokenization-step/25791/3\r\n\r\nI don't see any way to specify the hash of the tokenizer or the fingerprint of the tokenized data to use, so is the tokenized dataset basically lost at this point? Is there a good way to avoid this happening again if I retokenize the data?\r\n", "In your case it looks like the job failed before caching the data - maybe one of the processes crashed", "Interesting. Thanks for the observation. Any suggestions on how to start tracking that down? Perhaps run it singlethreaded and see if it crashes?", "You can monitor your RAM and disk space in case a process dies from OOM or disk full, and when it hangs you can check how many processes are running. IIRC there are other start methods for multiprocessing in python that may show an error message if a process dies.\r\n\r\nRunning on a single process can also help debugging this indeed", "https://github.com/huggingface/datasets/issues/3178#issuecomment-1189435462\r\n\r\nThe solution does not solve for using commonvoice dataset (\"mozilla-foundation/common_voice_11_0\")", "Hi @tung-msol could you open a new issue and share the error you got and the map function you used ?" ]
"2021-10-29T12:56:09"
"2023-01-04T15:33:16"
"2022-11-02T17:18:43"
CONTRIBUTOR
null
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ## Steps to reproduce the bug Here is a [colab](https://colab.research.google.com/drive/1gt75LCBIzsmBMvvipEOvWulvyZseBiA7?usp=sharing) but for some reason I cannot reproduce it there. That may have to do with logging/tqdm on Colab, or with running things in notebooks. I tried the code below on Windows and Ubuntu as a Python script and got the same issue (warning below). ```python import pickle from datasets import load_dataset import spacy class Processor: def __init__(self): self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"]) @staticmethod def collate(batch): return [d["en"] for d in batch] def parse(self, batch): batch = batch["translation"] return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]} def process(self): ds = load_dataset("wmt16", "de-en", split="train[:10%]") ds = ds.map(self.parse, batched=True, num_proc=6) if __name__ == '__main__': pr = Processor() # succeeds with open("temp.pkl", "wb") as f: pickle.dump(pr, f) print("Successfully pickled!") pr.process() ``` --- Here is a small change that includes `Hasher.hash` and shows that the hasher cannot successfully pickle parts from the NLP object. ```python from datasets.fingerprint import Hasher import pickle from datasets import load_dataset import spacy class Processor: def __init__(self): self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"]) @staticmethod def collate(batch): return [d["en"] for d in batch] def parse(self, batch): batch = batch["translation"] return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]} def process(self): ds = load_dataset("wmt16", "de-en", split="train[:10]") return ds.map(self.parse, batched=True) if __name__ == '__main__': pr = Processor() # succeeds with open("temp.pkl", "wb") as f: pickle.dump(pr, f) print("Successfully pickled class instance!") # succeeds with open("temp.pkl", "wb") as f: pickle.dump(pr.nlp, f) print("Successfully pickled nlp!") # fails print(Hasher.hash(pr.nlp)) pr.process() ``` ## Expected results This should be picklable and working (fingerprinted), with no warning. ## Actual results In the first snippet, I get this warning Parameter 'function'=<function Processor.parse at 0x7f44982247a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. In the second, I get this traceback which directs to the `Hasher.hash` line.
``` Traceback (most recent call last): File " \Python\Python36\lib\pickle.py", line 918, in save_global obj2, parent = _getattribute(module, name) File " \Python\Python36\lib\pickle.py", line 266, in _getattribute .format(name, obj)) AttributeError: Can't get local attribute 'add_codes.<locals>.ErrorsWithCodes' on <function add_codes at 0x00000296FF606EA0> During handling of the above exception, another exception occurred: Traceback (most recent call last): File " scratch_4.py", line 40, in <module> print(Hasher.hash(pr.nlp)) File " \lib\site-packages\datasets\fingerprint.py", line 191, in hash return cls.hash_default(value) File " \lib\site-packages\datasets\fingerprint.py", line 184, in hash_default return cls.hash_bytes(dumps(value)) File " \lib\site-packages\datasets\utils\py_utils.py", line 345, in dumps dump(obj, file) File " \lib\site-packages\datasets\utils\py_utils.py", line 320, in dump Pickler(file, recurse=True).dump(obj) File " \lib\site-packages\dill\_dill.py", line 498, in dump StockPickler.dump(self, obj) File " \Python\Python36\lib\pickle.py", line 409, in dump self.save(obj) File " \Python\Python36\lib\pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File " \Python\Python36\lib\pickle.py", line 634, in save_reduce save(state) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File " \Python\Python36\lib\pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems save(v) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 781, in save_list self._batch_appends(obj) File " \Python\Python36\lib\pickle.py", line 805, in _batch_appends save(x) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 736, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File " \Python\Python36\lib\pickle.py", line 634, in save_reduce save(state) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 736, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File " \Python\Python36\lib\pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems save(v) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 1176, in save_instancemethod0 pickler.save_reduce(MethodType, (obj.__func__, obj.__self__), obj=obj) File " \Python\Python36\lib\pickle.py", line 610, in save_reduce save(args) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 736, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " 
\lib\site-packages\datasets\utils\py_utils.py", line 523, in save_function obj=obj, File " \Python\Python36\lib\pickle.py", line 610, in save_reduce save(args) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 751, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File " \Python\Python36\lib\pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems save(v) File " \Python\Python36\lib\pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File " \Python\Python36\lib\pickle.py", line 605, in save_reduce save(cls) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 1439, in save_type StockPickler.save_global(pickler, obj, name=name) File " \Python\Python36\lib\pickle.py", line 922, in save_global (obj, module_name, name)) _pickle.PicklingError: Can't pickle <class 'spacy.errors.add_codes.<locals>.ErrorsWithCodes'>: it's not found as spacy.errors.add_codes.<locals>.ErrorsWithCodes ``` ## Environment info Tried on both Linux and Windows - `datasets` version: 1.14.0 - Platform: Windows-10-10.0.19041-SP0 + Python 3.7.9; Linux-5.11.0-38-generic-x86_64-with-Ubuntu-20.04-focal + Python 3.7.12 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3178/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3178/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3177
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3177/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3177/comments
https://api.github.com/repos/huggingface/datasets/issues/3177/events
https://github.com/huggingface/datasets/issues/3177
1,039,487,780
I_kwDODunzps499VMk
3,177
More control over TQDM when using map/filter with multiple processes
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nIt's hard to provide an API that would cover all use-cases with tqdm in this project.\r\n\r\nHowever, you can make it work by defining a custom decorator (a bit hacky tho) as follows:\r\n```python\r\nimport datasets\r\n\r\ndef progress_only_on_rank_0(func):\r\n def wrapper(*args, **kwargs):\r\n rank = kwargs.get(\"rank\")\r\n disable_tqdm = kwargs.get(\"disable_tqdm\", False)\r\n disable_tqdm = True if rank is not None and rank > 0 else disable_tqdm\r\n kwargs[\"disable_tqdm\"] = disable_tqdm\r\n return func(*args, **kwargs)\r\n return wrapper\r\n \r\ndatasets.Dataset._map_single = progress_only_on_rank_0(datasets.Dataset._map_single)\r\n``` \r\n\r\nEDIT: Ups, closed by accident.\r\n\r\nThanks for the provided links. `Trainer` requires this for training in multi-node distributed setting. However, `Dataset.map` doesn't support that yet.\r\n\r\nDo you have an API for this in mind? `Dataset.map` is already bloated with the arguments, so IMO it's not a good idea to add a new arg there.\r\n\r\n", "Inspiration may be found at `transformers`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/4a394cf53f05e73ab9bbb4b179a40236a5ffe45a/src/transformers/trainer.py#L1231-L1233\r\n\r\nTo get unique IDs for each worker, see https://stackoverflow.com/a/10192611/1150683" ]
"2021-10-29T11:56:16"
"2023-02-13T20:16:40"
"2023-02-13T20:16:40"
CONTRIBUTOR
null
It would help with the clutter in my terminal if tqdm were shown only for rank 0 when using `num_proc>1` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ``` The above snippet leads to a lot of TQDM bars and, depending on your terminal, these will not overwrite but keep pushing each other down. ``` #0: 0%| | 0/13 [00:00<?, ?ba/s] #1: 0%| | 0/13 [00:00<?, ?ba/s] #2: 0%| | 0/13 [00:00<?, ?ba/s] #3: 0%| | 0/13 [00:00<?, ?ba/s] #4: 0%| | 0/13 [00:00<?, ?ba/s] #5: 0%| | 0/13 [00:00<?, ?ba/s] #0: 8%| | 1/13 [00:00<?, ?ba/s] #1: 8%| | 1/13 [00:00<?, ?ba/s] ... ``` Instead, it would be welcome if we had the option to only show the progress of rank 0.
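One stopgap, sketched below on the assumption that your installed version exposes a progress-bar toggle (recent versions have `datasets.disable_progress_bar()`; older 1.x releases used `datasets.set_progress_bar_enabled(False)`): silence all bars everywhere except rank 0 of a distributed job. Note this disables the per-worker bars entirely on the other ranks rather than merging them. ```python import os from datasets import load_dataset import datasets # LOCAL_RANK is set by common distributed launchers such as torch.distributed rank = int(os.environ.get("LOCAL_RANK", "0")) if rank != 0: datasets.disable_progress_bar() # rank 0 keeps its bars; other ranks stay quiet dataset = load_dataset("imdb", split="train") dataset = dataset.map( lambda examples: {"n_chars": [len(t) for t in examples["text"]]}, batched=True, num_proc=6, ) ```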
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3177/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3176/comments
https://api.github.com/repos/huggingface/datasets/issues/3176/events
https://github.com/huggingface/datasets/pull/3176
1,039,068,312
PR_kwDODunzps4t00xS
3,176
OpenSLR dataset: update generate_examples to properly extract data for SLR83
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-29T00:59:27"
"2021-11-04T16:20:45"
"2021-10-29T10:04:09"
CONTRIBUTOR
null
Fixed #3168. The SLR83 indices are CSV files and there wasn't any code in openslr.py to process these files properly. The end result was an empty table. I've added code to properly process these CSV files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3176/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3176/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3176", "html_url": "https://github.com/huggingface/datasets/pull/3176", "diff_url": "https://github.com/huggingface/datasets/pull/3176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3176.patch", "merged_at": "2021-10-29T10:04:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/3175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3175/comments
https://api.github.com/repos/huggingface/datasets/issues/3175/events
https://github.com/huggingface/datasets/pull/3175
1,038,945,271
PR_kwDODunzps4t0bXw
3,175
Add docs for `to_tf_dataset`
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
"2021-10-28T20:55:22"
"2021-11-03T15:39:36"
"2021-11-03T10:07:23"
MEMBER
null
This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`: - Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 πŸ˜…). - Add an example for loading dataset from multiple zipped CSV files to the Load section. - Add an example for removing columns for an `IterableDataset`. - Add graphic for visualizing streaming.
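For reference, a minimal `to_tf_dataset` call of the kind the new docs cover might look like the sketch below; the GLUE/MRPC dataset and BERT checkpoint are placeholders, and argument names may differ slightly across versions. ```python from datasets import load_dataset from transformers import AutoTokenizer, DataCollatorWithPadding ds = load_dataset("glue", "mrpc", split="train") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") ds = ds.map( lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True), batched=True, ) # Wrap the Arrow-backed dataset as a batched, shuffled tf.data.Dataset tf_ds = ds.to_tf_dataset( columns=["input_ids", "token_type_ids", "attention_mask"], label_cols=["label"], batch_size=8, shuffle=True, collate_fn=DataCollatorWithPadding(tokenizer, return_tensors="tf"), ) ```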
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3175/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3175", "html_url": "https://github.com/huggingface/datasets/pull/3175", "diff_url": "https://github.com/huggingface/datasets/pull/3175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3175.patch", "merged_at": "2021-11-03T10:07:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/3174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3174/comments
https://api.github.com/repos/huggingface/datasets/issues/3174/events
https://github.com/huggingface/datasets/pull/3174
1,038,427,245
PR_kwDODunzps4tyuQ_
3,174
Asserts replaced by exceptions (huggingface#3171)
{ "login": "joseporiolayats", "id": 5772490, "node_id": "MDQ6VXNlcjU3NzI0OTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5772490?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joseporiolayats", "html_url": "https://github.com/joseporiolayats", "followers_url": "https://api.github.com/users/joseporiolayats/followers", "following_url": "https://api.github.com/users/joseporiolayats/following{/other_user}", "gists_url": "https://api.github.com/users/joseporiolayats/gists{/gist_id}", "starred_url": "https://api.github.com/users/joseporiolayats/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joseporiolayats/subscriptions", "organizations_url": "https://api.github.com/users/joseporiolayats/orgs", "repos_url": "https://api.github.com/users/joseporiolayats/repos", "events_url": "https://api.github.com/users/joseporiolayats/events{/privacy}", "received_events_url": "https://api.github.com/users/joseporiolayats/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-28T11:55:45"
"2021-11-06T06:35:32"
"2021-10-29T13:08:43"
CONTRIBUTOR
null
I've replaced two asserts with proper exceptions, following the guidelines described in issue #3171 and the contributing guidelines. PS: This is one of my first PRs, hoping I don't break anything!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3174/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3174/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3174", "html_url": "https://github.com/huggingface/datasets/pull/3174", "diff_url": "https://github.com/huggingface/datasets/pull/3174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3174.patch", "merged_at": "2021-10-29T13:08:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/3173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3173/comments
https://api.github.com/repos/huggingface/datasets/issues/3173/events
https://github.com/huggingface/datasets/pull/3173
1,038,404,300
PR_kwDODunzps4typcA
3,173
Fix issue with filelock filename being too long on encrypted filesystems
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-28T11:28:57"
"2021-10-29T09:42:24"
"2021-10-29T09:42:24"
CONTRIBUTOR
null
Infer max filename length in filelock on Unix-like systems. Should fix problems on encrypted filesystems such as eCryptfs. Fix #2924 cc: @lmmx
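The gist of the fix, as a rough sketch under the assumption of a POSIX `statvfs`: ask the filesystem itself for its filename limit instead of hard-coding 255, and fall back to 255 where the call is unavailable. ```python import os def max_filename_length(directory: str, fallback: int = 255) -> int: # eCryptfs reports 143 here, not the 255 that was previously assumed try: return os.statvfs(directory).f_namemax except (OSError, AttributeError): # statvfs does not exist on Windows return fallback ```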
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3173/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3173/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3173", "html_url": "https://github.com/huggingface/datasets/pull/3173", "diff_url": "https://github.com/huggingface/datasets/pull/3173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3173.patch", "merged_at": "2021-10-29T09:42:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/3172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3172/comments
https://api.github.com/repos/huggingface/datasets/issues/3172/events
https://github.com/huggingface/datasets/issues/3172
1,038,351,587
I_kwDODunzps494_zj
3,172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
{ "login": "vlievin", "id": 9859840, "node_id": "MDQ6VXNlcjk4NTk4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vlievin", "html_url": "https://github.com/vlievin", "followers_url": "https://api.github.com/users/vlievin/followers", "following_url": "https://api.github.com/users/vlievin/following{/other_user}", "gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}", "starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vlievin/subscriptions", "organizations_url": "https://api.github.com/users/vlievin/orgs", "repos_url": "https://api.github.com/users/vlievin/repos", "events_url": "https://api.github.com/users/vlievin/events{/privacy}", "received_events_url": "https://api.github.com/users/vlievin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "NB: even if the error is raised, the dataset is successfully cached. So restarting the script after every `map()` allows to ultimately run the whole preprocessing. But this prevents to realistically run the code over multiple nodes.", "Hi,\r\n\r\nIt's not easy to debug the problem without the script. I may be wrong since I'm not very familiar with PyTorch Lightning, but shouldn't you preprocess the data in the `prepare_data` function of `LightningDataModule` and not in the `setup` function.\r\nAs you can't modify the module state in `prepare_data` (according to the docs), use the `cache_file_name` argument in `Dataset.map` there, and reload the processed data in `setup` with `Dataset.from_file(cache_file_name)`. If `num_proc>1`, check the docs on the `suffix_template` argument of `Dataset.map` to get an idea what the final `cache_file_names` are going to be.\r\n\r\nLet me know if this helps.", "Hi @mariosasko, thank you for the hint, that helped me to move forward with that issue. \r\n\r\nI did a major refactoring of my project to disentangle my `LightningDataModule` and `Dataset`. Just FYI, it looks like:\r\n\r\n```python\r\nclass Builder():\r\n def __call__() -> DatasetDict:\r\n # load and preprocess the data\r\n return dataset\r\n\r\nclass DataModule(LightningDataModule):\r\n def prepare_data():\r\n self.builder()\r\n def setup():\r\n self.dataset = self.builder()\r\n```\r\n\r\nUnfortunately, the entanglement between `LightningDataModule` and `Dataset` was not the issue.\r\n\r\nThe culprit was `hydra` and a slight adjustment of the structure of my project solved this issue. The problematic project structure was:\r\n\r\n```\r\nsrc/\r\n | - cli.py\r\n | - training/\r\n | -experiment.py\r\n\r\n# code in experiment.py\r\ndef run_experiment(config):\r\n # preprocess data and run\r\n \r\n# code in cli.py\r\n@hydra.main(...)\r\ndef run(config):\r\n return run_experiment(config)\r\n```\r\n\r\nMoving `run()` from `clip.py` to `training.experiment.py` solved the issue with `SystemError 15`. No idea why. \r\n\r\nEven if the traceback was referring to `Dataset.__del__`, the problem does not seem to be primarily related to `datasets`, so I will close this issue. Thank you for your help!", "Please allow me to revive this discussion, as I have an extremely similar issue. Instead of an error, my datasets functions simply aren't caching properly. My setup is almost the same as yours, with hydra to configure my experiment parameters.\r\n\r\n@vlievin Could you confirm if your code correctly loads the cache? If so, do you have any public code that I can reference for comparison?\r\n\r\nI will post a full example with hydra that illustrates this problem in a little bit, probably on another thread.", "Hello @mariomeissner, very sorry for the late reply, I hope you have found a solution to your problem!\r\n\r\nI don't have public code at the moment. I have not experienced any other issue with hydra, even if I don't understand why changing the location of the definition of `run()` fixed the problem. \r\n\r\nOverall, I don't have issue with caching anymore, even when \r\n1. using custom fingerprints using the argument `new_fingerprint \r\n2. when using `num_proc>1`", "I solved my issue by turning the map callable into a class static method, like they do in `lightning-transformers`. Very strange...", "I have this issue with datasets v2.5.2 with Python 3.8.10 on Ubuntu 20.04.4 LTS. It does not occur when num_proc=1. When num_proc>1, it intermittently occurs and will cause process to hang. 
As previously mentioned, it occurs even when datasets have been previously cached. I have tried wrapping the logic in a static method as suggested by @mariomeissner, with no improvement.", "@philipchung hello, I have the same issue as yours. Did you solve it?", "No. I was not able to get num_proc>1 to work." ]
"2021-10-28T10:29:00"
"2023-01-26T07:07:54"
"2021-11-03T11:26:10"
NONE
null
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`); traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent investigating this issue, I have failed to isolate the bug, so let me describe my setup. In my project, `Dataset` is wrapped into a `LightningDataModule` and the data is preprocessed when calling `LightningDataModule.setup()`. Calling `.setup()` in an isolated script works fine (even when wrapped with `hydra.main()`). However, when calling `.setup()` within the experiment script (which depends on `pytorch_lightning`), the script crashes with `SystemError 15`. I could avoid throwing this error by modifying `Dataset.__del__()` (see below), but I believe this only moves the problem somewhere else. I am completely stuck with this issue, any hint would be welcome. ```python class Dataset: ... def __del__(self): if hasattr(self, "_data"): _ = self._data # <- ugly trick that allows avoiding the issue. del self._data if hasattr(self, "_indices"): del self._indices ``` ## Steps to reproduce the bug ```python # Unfortunately I couldn't isolate the bug. ``` ## Expected results Calling `Dataset.map()` without throwing an exception. Or at least raising a more detailed exception/traceback. ## Actual results ``` Exception ignored in: <function Dataset.__del__ at 0x7f7cec179160>β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5/5 [00:05<00:00, 1.17ba/s] Traceback (most recent call last): File ".../python3.8/site-packages/datasets/arrow_dataset.py", line 906, in __del__ del self._data File ".../python3.8/site-packages/ray/worker.py", line 1033, in sigterm_handler sys.exit(signum) SystemExit: 15 ``` ## Environment info Tested on 2 environments: **Environment 1.** - `datasets` version: 1.14.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 6.0.0 **Environment 2.** - `datasets` version: 1.14.0 - Platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.7 - PyArrow version: 6.0.0
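The workaround suggested in the comments above can be sketched as follows; the `preprocess` function and dataset name are placeholders, and the cache-path handling is simplified (with `num_proc>1` the cache is sharded according to `suffix_template`). ```python import os from datasets import Dataset, load_dataset from pytorch_lightning import LightningDataModule def preprocess(batch): # placeholder transform; the real one would tokenize, etc. return {"n_chars": [len(t) for t in batch["text"]]} class MyDataModule(LightningDataModule): def prepare_data(self): # Runs once per node: materialize the processed cache to a fixed path. os.makedirs("cache", exist_ok=True) ds = load_dataset("imdb", split="train") ds.map(preprocess, batched=True, cache_file_name="cache/train.arrow") def setup(self, stage=None): # Runs in every process: reload the already-written cache, no re-mapping. self.dataset = Dataset.from_file("cache/train.arrow") ```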
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3172/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3171/comments
https://api.github.com/repos/huggingface/datasets/issues/3171/events
https://github.com/huggingface/datasets/issues/3171
1,037,728,059
I_kwDODunzps492nk7
3,171
Raise exceptions instead of using assertions for control flow
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Adding the remaining tasks for this issue to help new code contributors. \r\n$ cd src/datasets && ack assert -lc \r\n- [x] commands/convert.py:1\r\n- [x] arrow_reader.py:3\r\n- [x] load.py:7\r\n- [x] utils/py_utils.py:2\r\n- [x] features/features.py:9\r\n- [x] arrow_writer.py:7\r\n- [x] search.py:6\r\n- [x] table.py:1\r\n- [x] metric.py:3\r\n- [x] tasks/image_classification.py:1\r\n- [x] arrow_dataset.py:17\r\n- [x] fingerprint.py:6\r\n- [x] io/json.py:1\r\n- [x] io/csv.py:1", "Hi all,\r\nI am interested in taking up `fingerprint.py`, `search.py`, `arrow_writer.py` and `metric.py`. Will raise a PR soon!", "Let me look into `arrow_dataset.py`, `table.py`, `data_files.py` & `features.py` ", "All the tasks are completed for this issue. This can be closed. " ]
"2021-10-27T18:26:52"
"2021-12-23T16:40:37"
"2021-12-23T16:40:37"
CONTRIBUTOR
null
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, there are 87 files containing `assert` statements (located under `datasets` and `src/datasets`), so when working on this, to manage the PR size, only modify 4-5 files at most before submitting a PR.
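The intended pattern, shown on a made-up column check: asserts vanish under `python -O`, while explicit exceptions always fire and can be caught by callers. ```python def check_columns(column_names): # Before: control flow through an assert (skipped entirely under `python -O`) # assert len(column_names) > 0, "Dataset must have at least one column" # After: an explicit, catchable exception if len(column_names) == 0: raise ValueError("Dataset must have at least one column") ```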
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3171/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3170/comments
https://api.github.com/repos/huggingface/datasets/issues/3170/events
https://github.com/huggingface/datasets/pull/3170
1,037,601,926
PR_kwDODunzps4twDUo
3,170
Preserve ordering in `zip_dict`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-27T16:07:30"
"2021-10-29T13:09:37"
"2021-10-29T13:09:37"
CONTRIBUTOR
null
Replace `set` with the `unique_values` generator in `zip_dict`. This PR fixes the problem of the example keys being ordered differently across Python sessions, caused by the `zip_dict` call in `Features.decode_example`.
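A minimal sketch of the idea (the real helper in `datasets` may differ in details): an order-preserving `unique_values` generator replaces the `set`, whose iteration order can change between Python sessions because of hash randomization. ```python def unique_values(values): # Yield each value once, in first-seen order (a set gives no stable order) seen = set() for value in values: if value not in seen: seen.add(value) yield value def zip_dict(*dicts): # Iterate over the dicts' items grouped by key, assuming shared keys for key in unique_values(k for d in dicts for k in d): yield key, tuple(d[key] for d in dicts) print(list(zip_dict({"a": 1, "b": 2}, {"a": 3, "b": 4}))) # [('a', (1, 3)), ('b', (2, 4))] ```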
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3170/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3170", "html_url": "https://github.com/huggingface/datasets/pull/3170", "diff_url": "https://github.com/huggingface/datasets/pull/3170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3170.patch", "merged_at": "2021-10-29T13:09:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3169/comments
https://api.github.com/repos/huggingface/datasets/issues/3169/events
https://github.com/huggingface/datasets/pull/3169
1,036,773,357
PR_kwDODunzps4ttYmZ
3,169
Configurable max filename length in file locks
{ "login": "lmmx", "id": 2979452, "node_id": "MDQ6VXNlcjI5Nzk0NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/2979452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lmmx", "html_url": "https://github.com/lmmx", "followers_url": "https://api.github.com/users/lmmx/followers", "following_url": "https://api.github.com/users/lmmx/following{/other_user}", "gists_url": "https://api.github.com/users/lmmx/gists{/gist_id}", "starred_url": "https://api.github.com/users/lmmx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lmmx/subscriptions", "organizations_url": "https://api.github.com/users/lmmx/orgs", "repos_url": "https://api.github.com/users/lmmx/repos", "events_url": "https://api.github.com/users/lmmx/events{/privacy}", "received_events_url": "https://api.github.com/users/lmmx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-26T21:52:55"
"2021-10-28T16:14:14"
"2021-10-28T16:14:13"
NONE
null
Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956) wherein the assumption of file lock maximum filename length to be 255 raises an OSError on encrypted drives (ecryptFS on Linux uses part of the lower filename, reducing the maximum filename size to 143). Allowing this limit to be set in the config module allows this to be modified by users. Will not affect Windows users, as their class passes 255 on init explicitly. Reproduced with the following example ([the first few lines of a script from Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html), fine-tuning a HF model): ```py import torch import flash from flash.audio import SpeechRecognition, SpeechRecognitionData from flash.core.data.utils import download_data # 1. Create the DataModule download_data("https://pl-flash-data.s3.amazonaws.com/timit_data.zip", "./data") datamodule = SpeechRecognitionData.from_json( input_fields="file", target_fields="text", train_file="data/timit/train.json", test_file="data/timit/test.json", ) ``` Which gave this traceback: ```py Traceback (most recent call last): File "lf_ft.py", line 10, in <module> datamodule = SpeechRecognitionData.from_json( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json return cls.from_data_source( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset data = load_data(data, mock_dataset) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)}) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1599, in load_dataset builder_instance = load_dataset_builder( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1457, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py", line 285, in __init__ with FileLock(lock_path): File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 403, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock' ``` Note the filename is 145 chars long: ``` >>> len("_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock") 145 ``` After installing datasets as an editable local 
package and modifying the script I was running to first include: ```py import datasets datasets.config.MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 143 ``` The error goes away. If I instead deliberately set the value incorrectly as 144, the OSError returns: ``` Traceback (most recent call last): File "lf_ft.py", line 14, in <module> datamodule = SpeechRecognitionData.from_json( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json return cls.from_data_source( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset data = load_data(data, mock_dataset) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)}) File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1605, in load_dataset builder_instance = load_dataset_builder( File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1463, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/louis/dev/hf_datasets/src/datasets/builder.py", line 285, in __init__ with FileLock(lock_path): File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 326, in __enter__ self.acquire() File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 275, in acquire self._acquire() File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 406, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-32c812b5c1272d64_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279...-5794079643713042223.lock' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3169/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3169", "html_url": "https://github.com/huggingface/datasets/pull/3169", "diff_url": "https://github.com/huggingface/datasets/pull/3169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3169.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3168/comments
https://api.github.com/repos/huggingface/datasets/issues/3168/events
https://github.com/huggingface/datasets/issues/3168
1,036,673,263
I_kwDODunzps49ymDv
3,168
OpenSLR/83 is empty
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?", "@albertvillanova Yes. Figured I introduced the broken config, I should fix it too.\r\n\r\nI've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.", "Looks like the tests all passed on the PR." ]
"2021-10-26T19:42:21"
"2021-10-29T10:04:09"
"2021-10-29T10:04:09"
CONTRIBUTOR
null
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 17877 }) }) ``` ## Actual results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 0 }) }) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.1.dev0 (master HEAD) - Platform: Ubuntu 20.04 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3168/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3167/comments
https://api.github.com/repos/huggingface/datasets/issues/3167/events
https://github.com/huggingface/datasets/issues/3167
1,036,488,992
I_kwDODunzps49x5Eg
3,167
bookcorpusopen no longer works
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "repos_url": "https://api.github.com/users/lucadiliello/repos", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Thanks for reporting :) I think #3280 should fix this", "I tried with the latest changes from #3280 on google colab and it worked fine :)\r\nWe'll do a new release soon, in the meantime you can use the updated version with:\r\n```python\r\nload_dataset(\"bookcorpusopen\", revision=\"master\")\r\n```", "Fixed by #3280." ]
"2021-10-26T16:06:15"
"2021-11-17T15:53:46"
"2021-11-17T15:53:46"
CONTRIBUTOR
null
## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usage (the machine has 1TB of RAM...). This did not happen with 1.4.1. I also tried `rm -rf ~/.cache/huggingface` but it did not help. Changing the Python version between 3.7, 3.8 and 3.9 did not help either. ## Steps to reproduce the bug ```python import datasets d = datasets.load_dataset('bookcorpusopen') ``` ## Expected results The dataset is prepared and loads without blocking or exhausting memory. ## Actual results Preparation hangs around `9924 examples` and the process is eventually killed because of the RAM usage. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Linux-5.4.0-1054-aws-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3167/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3166/comments
https://api.github.com/repos/huggingface/datasets/issues/3166/events
https://github.com/huggingface/datasets/pull/3166
1,036,450,283
PR_kwDODunzps4tsVQJ
3,166
Deprecate prepare_module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-26T15:28:24"
"2021-11-05T09:27:37"
"2021-11-05T09:27:36"
MEMBER
null
In version 1.13, `prepare_module` was deprecated. This PR adds a deprecation warning and removes its usage throughout the library, using `dataset_module_factory` or `metric_module_factory` instead. Fix #3165.
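The standard shape of such a deprecation shim, as a rough sketch rather than the PR's actual code (the stub factory below stands in for the real replacement API): ```python import warnings def dataset_module_factory(path, **kwargs): ... # stub standing in for the replacement API def prepare_module(path, **kwargs): # Deprecated alias kept for backward compatibility only. warnings.warn( "prepare_module is deprecated and will be removed in a future version. " "Use dataset_module_factory or metric_module_factory instead.", FutureWarning, ) return dataset_module_factory(path, **kwargs) ```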
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3166/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3166", "html_url": "https://github.com/huggingface/datasets/pull/3166", "diff_url": "https://github.com/huggingface/datasets/pull/3166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3166.patch", "merged_at": "2021-11-05T09:27:36" }
true
https://api.github.com/repos/huggingface/datasets/issues/3165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3165/comments
https://api.github.com/repos/huggingface/datasets/issues/3165/events
https://github.com/huggingface/datasets/issues/3165
1,036,448,998
I_kwDODunzps49xvTm
3,165
Deprecate prepare_module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-10-26T15:27:15"
"2021-11-05T09:27:36"
"2021-11-05T09:27:36"
MEMBER
null
In version 1.13, `prepare_module` was deprecated. Add a deprecation warning and remove its usage throughout the library.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3165/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3165/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3164/comments
https://api.github.com/repos/huggingface/datasets/issues/3164/events
https://github.com/huggingface/datasets/issues/3164
1,035,662,830
I_kwDODunzps49uvXu
3,164
Add raw data files to the Hub with GitHub LFS for canonical dataset
{ "login": "zlucia", "id": 40370937, "node_id": "MDQ6VXNlcjQwMzcwOTM3", "avatar_url": "https://avatars.githubusercontent.com/u/40370937?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zlucia", "html_url": "https://github.com/zlucia", "followers_url": "https://api.github.com/users/zlucia/followers", "following_url": "https://api.github.com/users/zlucia/following{/other_user}", "gists_url": "https://api.github.com/users/zlucia/gists{/gist_id}", "starred_url": "https://api.github.com/users/zlucia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zlucia/subscriptions", "organizations_url": "https://api.github.com/users/zlucia/orgs", "repos_url": "https://api.github.com/users/zlucia/repos", "events_url": "https://api.github.com/users/zlucia/events{/privacy}", "received_events_url": "https://api.github.com/users/zlucia/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @zlucia, I would actually suggest hosting the dataset as a huggingface.co-hosted dataset.\r\n\r\nThe only difference with a \"canonical\"/legacy dataset is that it's nested under an organization (here `stanford` or `stanfordnlp` for instance – completely up to you) but then you can upload your data using git-lfs (unlike \"canonical\" datasets where we don't host the data)\r\n\r\nLet me know if this fits your use case!\r\n\r\ncc'ing @osanseviero @lhoestq and rest of the team πŸ€—", "Hi @zlucia,\r\n\r\nAs @julien-c pointed out, the way to store/host raw data files in our Hub is by using what we call \"community\" datasets:\r\n- either at your personal namespace: `load_dataset(\"zlucia/casehold\")`\r\n- or at an organization namespace: for example, if you create the organization `reglab`, then `load_dataset(\"reglab/casehold\")`\r\n\r\nPlease note that \"canonical\" datasets do not normally store/host their raw data at our Hub, but in a third-party server. For \"canonical\" datasets, we just host the \"loading script\", that is, a Python script that downloads the raw data from a third-party server, creates the HuggingFace dataset from it and caches it locally.\r\n\r\nIn order to create an organization namespace in our Hub, please follow this link: https://huggingface.co/organizations/new\r\n\r\nThere are already many organizations at our Hub (complete list here: https://huggingface.co/organizations), such as:\r\n- Stanford CRFM: https://huggingface.co/stanford-crfm\r\n- Stanford NLP: https://huggingface.co/stanfordnlp\r\n- Stanford CS329S: Machine Learning Systems Design: https://huggingface.co/stanford-cs329s\r\n\r\nAlso note that you in your organization namespace:\r\n- you can add any number of members\r\n- you can store both raw datasets and models, and those can be immediately accessed using `datasets` and `transformers`\r\n\r\nOnce you have created an organization, these are the steps to upload/host a raw dataset: \r\n- The no-code procedure: https://huggingface.co/docs/datasets/upload_dataset.html\r\n- Using the command line (terminal): https://huggingface.co/docs/datasets/share.html#add-a-community-dataset\r\n\r\nPlease, feel free to ping me if you have any further questions or need help.\r\n", "Ah I see, I think I was unclear whether there were benefits to uploading a canonical dataset vs. a community provided dataset. Thanks for clarifying. I'll see if we want to create an organization namespace and otherwise, will upload the dataset under my personal namespace." ]
"2021-10-25T23:28:21"
"2021-10-30T19:54:51"
"2021-10-30T19:54:51"
NONE
null
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long-term storage solution compared to other storage solutions available to my team. From what I can tell, this option is not immediately supported if one follows the sharing steps detailed here: [https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset](https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset), since GitHub LFS is not supported for public forks. Is there a way to request this? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3164/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3163/comments
https://api.github.com/repos/huggingface/datasets/issues/3163/events
https://github.com/huggingface/datasets/pull/3163
1,035,475,061
PR_kwDODunzps4tpI44
3,163
Add Image feature
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-25T19:07:48"
"2021-12-30T06:37:21"
"2021-12-06T17:49:02"
CONTRIBUTOR
null
Adds the Image feature. This feature is heavily inspired by the recently added Audio feature (#2324). Currently, this PR is pretty simple. Some considerations that need further discussion: * I've decided to use `Pillow`/`PIL` as the image decoding library. Another candidate I considered is `torchvision`, mostly because of its `accimage` backend, which should be faster for loading `jpeg` images than `Pillow`. However, `torchvision`'s io module only supports png and jpeg images, has `torch` as a hard dependency, and requires magic to work with image bytes ( `torch.ByteTensor(torch.ByteStorage.from_buffer(image_bytes))`). * Currently, I'm converting `PIL`'s `Image` type to `np.ndarray`. The vision models in Transformers such as ViT prefer the raw `Image` type and not the decoded tensors, so there is a small overhead due to [this conversion](https://github.com/huggingface/transformers/blob/3e8761ab8077e3bb243fe2f78b2a682bd2257cf1/src/transformers/image_utils.py#L62-L73). IMO this is justified to keep this part aligned with the Audio feature, which also returns `np.ndarray`. What do you think? * Still have to work on the channel decoding logic: * PyTorch prefers the channel-first ordering (C, H, W); TF and Flax the channel-last ordering (H, W, C). One cool feature would be adjusting the channel order based on the selected formatter (`torch`, `tf`, `jax`). * By default, `Image.open` returns images of shape (H, W, C). However, ViT's feature extractor expects the format (C, H, W) if the image is passed as an array (explained [here](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__)), so I'm more inclined to the format (C, H, W). Which one do you prefer, (C, H, W) or (H, W, C)? * Are there any options you'd like to see? (the user could change those via `cast_column`, such as `sampling_rate` in the Audio feature) TODOs: * [x] tests * in subsequent PRs: * docs - a section in the docs, which gives some additional info on the Image and Audio feature and compares them to `ArrayND` * streaming (waiting for #3129 and #3133 to get merged first) * update the image tasks and the datasets to use the new feature * Image/Audio formatting [Colab Notebook](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c?usp=sharing) where you can play with this feature. I'm also adding a link to the [Image](https://github.com/tensorflow/datasets/blob/7ac7d506488d46038a5854961d068926b3f93c7f/tensorflow_datasets/core/features/image_feature.py#L155) feature in TFDS because one of our goals is to parse TFDS scripts eventually, so our Image feature has to (at least) support all the formats theirs does. Feel free to cc anyone who might be interested. P.S. Please ignore the changes in the `datasets/**/*.py` files 😄.
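A hedged sketch of how the feature proposed above could be used once merged, mirroring the Audio feature's `cast_column` pattern; the file paths are placeholders, and the decoded return type (PIL image vs. `np.ndarray`, channel order) was still an open question at this point:

```python
from datasets import Dataset, Image

# Start from a column of image file paths (placeholders here) and cast it
# to the proposed Image feature.
ds = Dataset.from_dict({"image": ["path/to/cat.png", "path/to/dog.png"]})
ds = ds.cast_column("image", Image())

# Accessing an example triggers decoding; per the discussion above, this
# could be an np.ndarray of shape (H, W, C) or (C, H, W).
decoded = ds[0]["image"]
```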
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3163/reactions", "total_count": 8, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 7, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/3163/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3163", "html_url": "https://github.com/huggingface/datasets/pull/3163", "diff_url": "https://github.com/huggingface/datasets/pull/3163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3163.patch", "merged_at": "2021-12-06T17:49:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/3161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3161/comments
https://api.github.com/repos/huggingface/datasets/issues/3161/events
https://github.com/huggingface/datasets/pull/3161
1,035,444,292
PR_kwDODunzps4tpCsm
3,161
Add riddle_sense dataset
{ "login": "ziyiwu9494", "id": 44691149, "node_id": "MDQ6VXNlcjQ0NjkxMTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/44691149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ziyiwu9494", "html_url": "https://github.com/ziyiwu9494", "followers_url": "https://api.github.com/users/ziyiwu9494/followers", "following_url": "https://api.github.com/users/ziyiwu9494/following{/other_user}", "gists_url": "https://api.github.com/users/ziyiwu9494/gists{/gist_id}", "starred_url": "https://api.github.com/users/ziyiwu9494/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ziyiwu9494/subscriptions", "organizations_url": "https://api.github.com/users/ziyiwu9494/orgs", "repos_url": "https://api.github.com/users/ziyiwu9494/repos", "events_url": "https://api.github.com/users/ziyiwu9494/events{/privacy}", "received_events_url": "https://api.github.com/users/ziyiwu9494/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-25T18:30:56"
"2021-11-04T14:01:15"
"2021-11-04T14:01:15"
CONTRIBUTOR
null
Adding a new dataset for QA with riddles. I'm confused about the tagging process: it looks like the Streamlit app loads data from the current repo, so should tagging be done after merging, or from my fork?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3161/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3161", "html_url": "https://github.com/huggingface/datasets/pull/3161", "diff_url": "https://github.com/huggingface/datasets/pull/3161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3161.patch", "merged_at": "2021-11-04T14:01:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/3160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3160/comments
https://api.github.com/repos/huggingface/datasets/issues/3160/events
https://github.com/huggingface/datasets/pull/3160
1,035,274,640
PR_kwDODunzps4tofO0
3,160
Better error msg if `len(predictions)` doesn't match `len(references)` in metrics
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-25T15:25:05"
"2021-11-05T11:44:59"
"2021-11-05T09:31:02"
CONTRIBUTOR
null
Improve the error message in `Metric.add_batch` if `len(predictions)` doesn't match `len(references)`. cc: @BramVanroy (feel free to test this code on your examples and review this PR)
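A small sketch of the failure mode this PR targets, assuming the `accuracy` metric as a stand-in; the exact wording of the improved error message is defined in the PR itself:

```python
from datasets import load_metric

metric = load_metric("accuracy")

# 3 predictions vs. 2 references: add_batch should now raise a ValueError
# that points out the mismatched lengths instead of a generic format error.
try:
    metric.add_batch(predictions=[0, 1, 1], references=[0, 1])
except ValueError as err:
    print(err)
```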
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3160/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3160", "html_url": "https://github.com/huggingface/datasets/pull/3160", "diff_url": "https://github.com/huggingface/datasets/pull/3160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3160.patch", "merged_at": "2021-11-05T09:31:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/3159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3159/comments
https://api.github.com/repos/huggingface/datasets/issues/3159/events
https://github.com/huggingface/datasets/pull/3159
1,035,174,560
PR_kwDODunzps4toKD5
3,159
Make inspect.get_dataset_config_names always return a non-empty list
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-25T13:59:43"
"2021-10-29T13:14:37"
"2021-10-28T05:44:49"
MEMBER
null
Make all configs named, so that no special unnamed-config case needs to be handled differently. Fix #3135.
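A sketch of the behaviour this change guarantees, assuming the helper is importable from the top-level `datasets` package:

```python
from datasets import get_dataset_config_names

# Datasets with explicit configs return all of them...
print(get_dataset_config_names("glue"))   # e.g. ['cola', 'sst2', ...]

# ...and after this change, a dataset without named configs still yields a
# non-empty list containing its single default config, never [].
print(get_dataset_config_names("squad"))  # e.g. ['plain_text']
```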
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3159/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3159", "html_url": "https://github.com/huggingface/datasets/pull/3159", "diff_url": "https://github.com/huggingface/datasets/pull/3159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3159.patch", "merged_at": "2021-10-28T05:44:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/3158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3158/comments
https://api.github.com/repos/huggingface/datasets/issues/3158/events
https://github.com/huggingface/datasets/pull/3158
1,035,158,070
PR_kwDODunzps4toGpe
3,158
Fix string encoding for Value type
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-25T13:44:13"
"2021-10-25T14:12:06"
"2021-10-25T14:12:05"
MEMBER
null
Some metrics have `string` features, but they currently fail if users pass integers instead. Indeed, the feature encoding that handles the conversion of the user's objects to the right Python type is missing a case for `string`, while it already works as expected for integers, floats and booleans. Here is example code that didn't work previously, but that works with this fix: ```python import datasets # Note that 'id' is an integer while the SQuAD metric uses strings predictions = [{'prediction_text': '1976', 'id': 5}] references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': 5}] squad_metric = datasets.load_metric("squad") squad_metric.add_batch(predictions=predictions, references=references) results = squad_metric.compute() # {'exact_match': 100.0, 'f1': 100.0} ``` cc @sgugger @philschmid
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3158/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3158/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3158", "html_url": "https://github.com/huggingface/datasets/pull/3158", "diff_url": "https://github.com/huggingface/datasets/pull/3158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3158.patch", "merged_at": "2021-10-25T14:12:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/3157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3157/comments
https://api.github.com/repos/huggingface/datasets/issues/3157/events
https://github.com/huggingface/datasets/pull/3157
1,034,775,165
PR_kwDODunzps4tm3_I
3,157
Fixed: duplicate parameter and missing parameter in docstring
{ "login": "PanQiWei", "id": 46810637, "node_id": "MDQ6VXNlcjQ2ODEwNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/46810637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PanQiWei", "html_url": "https://github.com/PanQiWei", "followers_url": "https://api.github.com/users/PanQiWei/followers", "following_url": "https://api.github.com/users/PanQiWei/following{/other_user}", "gists_url": "https://api.github.com/users/PanQiWei/gists{/gist_id}", "starred_url": "https://api.github.com/users/PanQiWei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PanQiWei/subscriptions", "organizations_url": "https://api.github.com/users/PanQiWei/orgs", "repos_url": "https://api.github.com/users/PanQiWei/repos", "events_url": "https://api.github.com/users/PanQiWei/events{/privacy}", "received_events_url": "https://api.github.com/users/PanQiWei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-25T07:26:00"
"2021-10-25T14:02:19"
"2021-10-25T14:02:19"
CONTRIBUTOR
null
Changes the duplicated `data_files` parameter in the `DatasetBuilder.__init__` docstring to the missing `data_dir` parameter.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3157/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3157", "html_url": "https://github.com/huggingface/datasets/pull/3157", "diff_url": "https://github.com/huggingface/datasets/pull/3157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3157.patch", "merged_at": "2021-10-25T14:02:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/3155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3155/comments
https://api.github.com/repos/huggingface/datasets/issues/3155/events
https://github.com/huggingface/datasets/issues/3155
1,034,468,757
I_kwDODunzps49qL2V
3,155
Illegal instruction (core dumped) at datasets import
{ "login": "hacobe", "id": 91226467, "node_id": "MDQ6VXNlcjkxMjI2NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hacobe", "html_url": "https://github.com/hacobe", "followers_url": "https://api.github.com/users/hacobe/followers", "following_url": "https://api.github.com/users/hacobe/following{/other_user}", "gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}", "starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hacobe/subscriptions", "organizations_url": "https://api.github.com/users/hacobe/orgs", "repos_url": "https://api.github.com/users/hacobe/repos", "events_url": "https://api.github.com/users/hacobe/events{/privacy}", "received_events_url": "https://api.github.com/users/hacobe/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "It seems to be an issue with how conda-forge is building the binaries. It works on some machines, but not a machine with AMD Opteron 8384 processors." ]
"2021-10-24T17:21:36"
"2021-11-18T19:07:04"
"2021-11-18T19:07:03"
CONTRIBUTOR
null
## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-forge datasets # exits with output "Illegal instruction (core dumped)" python -m datasets ``` ## Environment info When I run "datasets-cli env", I also get "Illegal instruction (core dumped)" If I run the following commands: ``` conda create --prefix path/to/another/new/env conda activate path/to/another/new/env conda install -c huggingface transformers transformers-cli env ``` Then I get: - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-67-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No Let me know what additional information you need in order to debug this issue. Thanks in advance!
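Given the resolution above (conda-forge binaries failing on an AMD Opteron 8384), a plausible but unconfirmed explanation is that the binaries were built with CPU instructions (e.g. AVX) that this 2009-era processor lacks. A Linux-only sketch to check for them:

```python
# Read the CPU feature flags exposed by the kernel (Linux only).
with open("/proc/cpuinfo") as f:
    flags = set(f.read().split())

# Machines that crash with "Illegal instruction" on import would be
# expected to lack extensions such as these.
print("avx" in flags, "sse4_2" in flags)
```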
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3155/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3154/comments
https://api.github.com/repos/huggingface/datasets/issues/3154/events
https://github.com/huggingface/datasets/issues/3154
1,034,361,806
I_kwDODunzps49pxvO
3,154
Sacrebleu unexpected behaviour/requirement for data format
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @BramVanroy!\r\n\r\nGood question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table.\r\n\r\nThat's why your example throws an error even though it matches the schema:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],\r\n ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],\r\n] # len(refs) = 2\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nInstead, it should be:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'The dog had bit the man.'],\r\n ['It was not unexpected.', 'No one was surprised.'],\r\n ['The man bit him first.', 'The man had bitten the dog.'], \r\n] # len(refs) = 3\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nHowever, `sacreblue` works with the format that's described in your example, hence this part:\r\nhttps://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99\r\n\r\nHope you get an idea!", "Thanks, that makes sense. It is a bit unfortunate because it may be confusing to users since the input format is suddenly different than what they may expect from the underlying library/metric. But it is understandable due to how `datasets` works!" ]
"2021-10-24T08:55:33"
"2021-10-31T09:08:32"
"2021-10-31T09:08:31"
CONTRIBUTOR
null
## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/datasets/pull/3153). In the below snippet, the original sacrebleu snippet works just fine whereas the datasets implementation throws an error. ## Steps to reproduce the bug ```python import sacrebleu import datasets refs = [ ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'], ] hyps = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'] expected_bleu = 48.530827 ds_bleu = datasets.load_metric("sacrebleu") bleu_score_sb = sacrebleu.corpus_bleu(hyps, refs).score print(bleu_score_sb, expected_bleu) # works: 48.5308... bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"] print(bleu_score_ds, expected_bleu) # ValueError: Predictions and/or references don't match the expected format. ``` This seems to be related to how datasets forces the features format here: https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99 and then manipulates the references during the compute stage here https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L119-L122 I do not quite understand why that is required since sacrebleu handles argument parsing quite well [by itself](https://github.com/mjpost/sacrebleu/blob/2787185dd0f8d224c72ee5a831d163c2ac711a47/sacrebleu/metrics/base.py#L229). ## Actual results Traceback (most recent call last): File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2020.3\scratches\scratch_23.py", line 23, in <module> bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"] File "C:\dev\python\datasets\src\datasets\metric.py", line 392, in compute self.add_batch(predictions=predictions, references=references) File "C:\dev\python\datasets\src\datasets\metric.py", line 439, in add_batch raise ValueError( ValueError: Predictions and/or references don't match the expected format. Expected format: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id='references')}, Input predictions: ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'], Input references: [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']] ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.2 - PyArrow version: 4.0.1
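Building on the explanation in the comments above, a small standalone sketch of converting sacrebleu-style references (one list per reference *set*) into the per-prediction layout the `datasets` metric expects:

```python
refs_sacrebleu = [
    ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
    ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],
]

# Transpose so there is one inner list of alternative references per
# prediction, making len(references) == len(predictions).
refs_datasets = [list(group) for group in zip(*refs_sacrebleu)]
# [['The dog bit the man.', 'The dog had bit the man.'], ...]
```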
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3154/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3153/comments
https://api.github.com/repos/huggingface/datasets/issues/3153/events
https://github.com/huggingface/datasets/pull/3153
1,034,179,198
PR_kwDODunzps4tlEVE
3,153
Add TER (as implemented in sacrebleu)
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-23T14:26:45"
"2021-11-02T11:04:11"
"2021-11-02T11:04:11"
CONTRIBUTOR
null
Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets`, so I thought this would be a nice addition. I started from the sacrebleu implementation, as the two metrics have a lot in common. Verified with sacrebleu's [testing suite](https://github.com/mjpost/sacrebleu/blob/078c440168c6adc89ba75fe6d63f0d922d42bcfe/test/test_ter.py) that this indeed works as intended. ```python import datasets test_cases = [ (['aaaa bbbb cccc dddd'], ['aaaa bbbb cccc dddd'], 0), # perfect match (['dddd eeee ffff'], ['aaaa bbbb cccc'], 1), # no overlap ([''], ['a'], 1), # corner case, empty hypothesis (['d e f g h a b c'], ['a b c d e f g h'], 1 / 8), # a single shift fixes MT ( [ 'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit das Bild ungefähr dieselbe Größe aufweist wie die andere Größe .', 'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "', 'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie Werte für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " ein .', 'Sie können beispielsweise ein Dokument erstellen , das ein Auto über die Bühne enthält .', 'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "', ], [ 'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit die Darstellung des Bildes in einer anderen Größe beibehalten wird .', 'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "', 'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " niedrigere Werte ein .', 'Sie können beispielsweise ein Dokument erstellen , das ein Auto enthalt , das sich über die Bühne bewegt .', 'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "', ], 0.136 # realistic example from WMT dev data (2019) ), ] ter = datasets.load_metric(r"path\to\datasets\metrics\ter") predictions = ["hello there general kenobi", "foo bar foobar"] references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]] print(ter.compute(predictions=predictions, references=references)) for hyp, ref, score in test_cases: # Note the reference transformation which is different from sacrebleu's input format results = ter.compute(predictions=hyp, references=[[r] for r in ref]) assert 100*score == results["score"], f"expected {100*score}, got {results['score']}" ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3153/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3153", "html_url": "https://github.com/huggingface/datasets/pull/3153", "diff_url": "https://github.com/huggingface/datasets/pull/3153.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3153.patch", "merged_at": "2021-11-02T11:04:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/3152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3152/comments
https://api.github.com/repos/huggingface/datasets/issues/3152/events
https://github.com/huggingface/datasets/pull/3152
1,034,039,379
PR_kwDODunzps4tkqi-
3,152
Fix some typos in the documentation
{ "login": "h4iku", "id": 3812788, "node_id": "MDQ6VXNlcjM4MTI3ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h4iku", "html_url": "https://github.com/h4iku", "followers_url": "https://api.github.com/users/h4iku/followers", "following_url": "https://api.github.com/users/h4iku/following{/other_user}", "gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}", "starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h4iku/subscriptions", "organizations_url": "https://api.github.com/users/h4iku/orgs", "repos_url": "https://api.github.com/users/h4iku/repos", "events_url": "https://api.github.com/users/h4iku/events{/privacy}", "received_events_url": "https://api.github.com/users/h4iku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-23T01:38:35"
"2021-10-25T14:27:36"
"2021-10-25T14:03:48"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3152/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3152", "html_url": "https://github.com/huggingface/datasets/pull/3152", "diff_url": "https://github.com/huggingface/datasets/pull/3152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3152.patch", "merged_at": "2021-10-25T14:03:48" }
true
https://api.github.com/repos/huggingface/datasets/issues/3151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3151/comments
https://api.github.com/repos/huggingface/datasets/issues/3151/events
https://github.com/huggingface/datasets/pull/3151
1,033,890,501
PR_kwDODunzps4tkL7t
3,151
Re-add faiss to Windows testing suite
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-22T19:34:29"
"2021-11-02T10:47:34"
"2021-11-02T10:06:03"
CONTRIBUTOR
null
In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore include it for Windows in the setup file. At first, tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. This built-in library is notoriously bad at playing nice on Windows. The required change isn't pretty, but it works. First, set `delete=False` so the file is not automatically deleted on exit. Then, manually delete the file with `unlink`. It's weird, I know, but it works. ```python with tempfile.NamedTemporaryFile(delete=False) as tmp_file: # do stuff os.unlink(tmp_file.name) ``` closes #3150
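A self-contained sketch of why the workaround above is needed: on Windows, a file created by `NamedTemporaryFile` cannot be reopened by name while the original handle is still open, so the file has to outlive the `with` block and be unlinked manually (the write and reopen below are placeholders for the real faiss index I/O):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
    tmp_file.write(b"placeholder index data")
    name = tmp_file.name

# The handle is closed here, so the file can be reopened by name on Windows;
# clean up explicitly since delete=False was used.
with open(name, "rb") as f:
    assert f.read() == b"placeholder index data"
os.unlink(name)
```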
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3151/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3151", "html_url": "https://github.com/huggingface/datasets/pull/3151", "diff_url": "https://github.com/huggingface/datasets/pull/3151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3151.patch", "merged_at": "2021-11-02T10:06:03" }
true
https://api.github.com/repos/huggingface/datasets/issues/3150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3150/comments
https://api.github.com/repos/huggingface/datasets/issues/3150/events
https://github.com/huggingface/datasets/issues/3150
1,033,831,530
I_kwDODunzps49nwRq
3,150
Faiss _is_ available on Windows
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sure, feel free to open a PR." ]
"2021-10-22T18:07:16"
"2021-11-02T10:06:03"
"2021-11-02T10:06:03"
CONTRIBUTOR
null
In the setup file, I find the following: https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171 However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wheels are available. Maybe this was true for older versions? For current versions, this can be removed, I think. (This isn't really a bug, but I didn't know how else to tag it.) If you agree, I can do a quick PR and remove that line.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3150/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3149/comments
https://api.github.com/repos/huggingface/datasets/issues/3149/events
https://github.com/huggingface/datasets/pull/3149
1,033,747,625
PR_kwDODunzps4tjuUt
3,149
Add CMU Hinglish DoG Dataset for MT
{ "login": "Ishan-Kumar2", "id": 46553104, "node_id": "MDQ6VXNlcjQ2NTUzMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ishan-Kumar2", "html_url": "https://github.com/Ishan-Kumar2", "followers_url": "https://api.github.com/users/Ishan-Kumar2/followers", "following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}", "gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions", "organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs", "repos_url": "https://api.github.com/users/Ishan-Kumar2/repos", "events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}", "received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-22T16:17:25"
"2021-11-15T11:36:42"
"2021-11-15T10:27:45"
CONTRIBUTOR
null
Address part of #2841. Added the CMU Hinglish DoG Dataset as in GLUECoS. Added it as a separate dataset since, unlike the other GLUECoS tasks, it can't be evaluated with a BERT-like model. It consists of a parallel corpus between Hinglish (Hindi-English) and English, and can be used for machine translation between the two. The data processing part is inspired by the GLUECoS repo [here](https://github.com/microsoft/GLUECoS/blob/7fdc51653e37a32aee17505c47b7d1da364fa77e/Data/Preprocess_Scripts/preprocess_mt_en_hi.py) The dummy data part is not working properly; it shows ``` UnboundLocalError: local variable 'generator_splits' referenced before assignment ``` when I run without ``--auto_generate``. Please let me know how I can fix that. Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3149/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3149", "html_url": "https://github.com/huggingface/datasets/pull/3149", "diff_url": "https://github.com/huggingface/datasets/pull/3149.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3149.patch", "merged_at": "2021-11-15T10:27:45" }
true
https://api.github.com/repos/huggingface/datasets/issues/3148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3148/comments
https://api.github.com/repos/huggingface/datasets/issues/3148/events
https://github.com/huggingface/datasets/issues/3148
1,033,685,208
I_kwDODunzps49nMjY
3,148
Streaming with num_workers != 0
{ "login": "justheuristic", "id": 3491902, "node_id": "MDQ6VXNlcjM0OTE5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/3491902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justheuristic", "html_url": "https://github.com/justheuristic", "followers_url": "https://api.github.com/users/justheuristic/followers", "following_url": "https://api.github.com/users/justheuristic/following{/other_user}", "gists_url": "https://api.github.com/users/justheuristic/gists{/gist_id}", "starred_url": "https://api.github.com/users/justheuristic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justheuristic/subscriptions", "organizations_url": "https://api.github.com/users/justheuristic/orgs", "repos_url": "https://api.github.com/users/justheuristic/repos", "events_url": "https://api.github.com/users/justheuristic/events{/privacy}", "received_events_url": "https://api.github.com/users/justheuristic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I can confirm that I was able to reproduce the bug. This seems odd given that #3423 reports duplicate data retrieval when `num_workers` and `streaming` are used together, which is obviously different from what is reported here. ", "Any update? A possible solution is to have multiple arrow files as shards, and handle them like what webdatasets does.\r\n![image](https://user-images.githubusercontent.com/11533479/148176637-72746b2c-c122-47aa-bbfe-224b13ee9a71.png)\r\n\r\nPytorch's new dataset RFC is supporting sharding now, which may helps avoid duplicate data under streaming mode. (https://github.com/pytorch/pytorch/blob/master/torch/utils/data/datapipes/iter/grouping.py#L13)\r\n", "Hi ! Thanks for the insights :) Note that in streaming mode there're usually no arrow files. The data are streamed from TAR, ZIP, text, etc. files directly from the web. Though for sharded datasets we can definitely adopt a similar strategy !", "fixed by #4375 " ]
"2021-10-22T15:07:17"
"2022-07-04T12:14:58"
"2022-07-04T12:14:58"
NONE
null
## Describe the bug When using dataset streaming with a PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook that reproduces the bug https://colab.research.google.com/drive/1Mgl0oTZSNIE3UeGl_oX9wPCOIxRg19h1?usp=sharing ```python !pip install datasets==1.14.0 should_freeze_forever = True # ^-- set this to True in order to freeze forever, set to False in order to work normally import torch from datasets import load_dataset data = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train", streaming=True) data = data.map(lambda x: {"text": x["text"], "orig": f"oscar[{x['id']}]"}, batched=True) data = data.shuffle(100, seed=1337) data = data.with_format("torch") loader = torch.utils.data.DataLoader(data, batch_size=2, num_workers=2 if should_freeze_forever else 0) # v-- the code should freeze forever at this line for i, row in enumerate(loader): print(row) if i > 10: break print("DONE!") ``` ## Expected results The code should not freeze forever with num_workers=2 ## Actual results The code freezes forever with num_workers=2 ## Environment info - `datasets` version: 1.14.0 (also found in previous versions) - Platform: google colab (also locally) - Python version: 3.7 (also 3.8) - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3148/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3147/comments
https://api.github.com/repos/huggingface/datasets/issues/3147/events
https://github.com/huggingface/datasets/pull/3147
1,033,607,659
PR_kwDODunzps4tjRHG
3,147
Fix CLI test to ignore verifications when saving infos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-22T13:52:46"
"2021-10-27T08:01:50"
"2021-10-27T08:01:49"
MEMBER
null
Fix #3146.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3147/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3147", "html_url": "https://github.com/huggingface/datasets/pull/3147", "diff_url": "https://github.com/huggingface/datasets/pull/3147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3147.patch", "merged_at": "2021-10-27T08:01:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/3146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3146/comments
https://api.github.com/repos/huggingface/datasets/issues/3146/events
https://github.com/huggingface/datasets/issues/3146
1,033,605,947
I_kwDODunzps49m5M7
3,146
CLI test command throws NonMatchingSplitsSizesError when saving infos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-10-22T13:50:53"
"2021-10-27T08:01:49"
"2021-10-27T08:01:49"
MEMBER
null
When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown: ``` $ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs Testing builder 'Alittihad' (1/10) Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4... Traceback (most recent call last): File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module> sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')()) File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main service.run() File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run builder.download_and_prepare( File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare self._download_and_prepare( File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}] ``` This is because a previous run generated a wrong `dataset_info.json`. This error can be avoided by passing `--ignore_verifications`, but I think this should be assumed when passing `--save_infos`.
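A hedged workaround sketch in the meantime, assuming the `ignore_verifications` kwarg of `download_and_prepare` mirrors the CLI's `--ignore_verifications` flag (datasets 1.x API):

```python
# Sketch: regenerate the split infos while skipping the stale-size check.
from datasets import load_dataset_builder

builder = load_dataset_builder("arabic_billion_words", "Alittihad")
builder.download_and_prepare(ignore_verifications=True)  # assumption: mirrors --ignore_verifications
print(builder.info.splits)  # freshly recorded split sizes
```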
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3146/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3145
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3145/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3145/comments
https://api.github.com/repos/huggingface/datasets/issues/3145/events
https://github.com/huggingface/datasets/issues/3145
1,033,580,009
I_kwDODunzps49my3p
3,145
[when Image type will exist] provide a way to get the data as binary + filename
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "@severo, maybe somehow related to this PR ?\r\n- #3129", "@severo I'll keep that in mind.\r\n\r\nYou can track progress on the Image feature in #3163 (still in the early stage). ", "Hi ! As discussed with @severo offline it looks like the dataset viewer already supports reading PIL images, so maybe the dataset viewer doesn't need to disable decoding after all", "Fixed with https://github.com/huggingface/datasets/pull/3163" ]
"2021-10-22T13:23:49"
"2021-12-22T11:05:37"
"2021-12-22T11:05:36"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in order to serve it on the web. Note: this issue applies in exactly the same way to the `Audio` type. **Describe the solution you'd like** If a "cell" has the type `Image`, provide a way to get the binary content of the file, and the filename, e.g. as: ```python filename: str data: bytes ``` **Describe alternatives you've considered** A way to write the cell to the disk (passing a local directory), and then return the pathname, filename, and mimetype.
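A minimal sketch of the requested access pattern; the `path`/`bytes` field names are illustrative assumptions, not a confirmed API:

```python
import os

# Stand-in for a hypothetical undecoded cell, e.g. dataset[0]["image"].
cell = {"path": "/data/images/cat.png", "bytes": b"\x89PNG\r\n\x1a\n..."}

filename: str = os.path.basename(cell["path"])  # "cat.png"
data: bytes = cell["bytes"]                     # raw file content, ready to serve

with open(filename, "wb") as f:  # or write it under a chosen local directory
    f.write(data)
```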
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3145/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3144/comments
https://api.github.com/repos/huggingface/datasets/issues/3144/events
https://github.com/huggingface/datasets/issues/3144
1,033,573,760
I_kwDODunzps49mxWA
3,144
Infer the features if missing
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "Done by @lhoestq here: https://github.com/huggingface/datasets/pull/4500 (https://github.com/huggingface/datasets/pull/4500/files#diff-02930e1d966f4b41f9ddf15d961f16f5466d9bee583138657018c7329f71aa43R1255 in particular)\r\n" ]
"2021-10-22T13:17:33"
"2022-09-08T08:23:10"
"2022-09-08T08:23:10"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Some datasets, in particular community datasets, have no info file, thus no features. **Describe the solution you'd like** If a dataset has no features, the first loaded data (5-10 rows) could be used to infer the types. Related: `datasets` would provide a way to load the data, and get the rows AND the features as the result. **Describe alternatives you've considered** The HF hub could also provide some UI to help the dataset maintainers make the types of their rows explicit, or automatically infer them as an initial proposal.
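One possible illustration of the inference step, using pyarrow's own type inference on the first rows (a sketch only, not necessarily the mechanism `datasets` ended up using; `Table.from_pylist` requires a recent pyarrow):

```python
import pyarrow as pa
from datasets import Features

first_rows = [
    {"text": "hello", "label": 0},
    {"text": "world", "label": 1},
]
table = pa.Table.from_pylist(first_rows)             # pyarrow infers the column types
features = Features.from_arrow_schema(table.schema)  # -> {"text": Value("string"), "label": Value("int64")}
print(features)
```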
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3144/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3141/comments
https://api.github.com/repos/huggingface/datasets/issues/3141/events
https://github.com/huggingface/datasets/pull/3141
1,033,555,910
PR_kwDODunzps4tjGYz
3,141
Fix caching bugs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-22T12:59:25"
"2021-10-22T20:52:08"
"2021-10-22T13:47:05"
CONTRIBUTOR
null
This PR fixes some caching bugs (most likely introduced in the latest refactor): * remove ")" added by accident in the dataset dir name * correctly pass the namespace kwargs in `CachedDatasetModuleFactory` * improve the warning message if `HF_DATASETS_OFFLINE` is `True`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3141/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3141/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3141", "html_url": "https://github.com/huggingface/datasets/pull/3141", "diff_url": "https://github.com/huggingface/datasets/pull/3141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3141.patch", "merged_at": "2021-10-22T13:47:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/3137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3137/comments
https://api.github.com/repos/huggingface/datasets/issues/3137/events
https://github.com/huggingface/datasets/pull/3137
1,033,363,652
PR_kwDODunzps4tievk
3,137
Fix numpy deprecation warning for ragged tensors
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-22T09:17:46"
"2021-10-22T16:04:15"
"2021-10-22T16:04:14"
MEMBER
null
NumPy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together; otherwise, the resulting array should have `dtype=np.object`. Fix #3084 cc @Rocketknight1
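A minimal illustration of the collation rule described above (plain NumPy, no `datasets` involved):

```python
import numpy as np

matching = [np.zeros((2, 2)), np.ones((2, 2))]
ragged = [np.zeros((2, 2)), np.ones((3, 2))]

collated = np.array(matching)                # shapes match: a regular (2, 2, 2) array
ragged_arr = np.array(ragged, dtype=object)  # explicit object dtype avoids the
                                             # deprecation warning on ragged input
print(collated.shape, ragged_arr.dtype)
```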
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3137/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3137", "html_url": "https://github.com/huggingface/datasets/pull/3137", "diff_url": "https://github.com/huggingface/datasets/pull/3137.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3137.patch", "merged_at": "2021-10-22T16:04:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/3136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3136/comments
https://api.github.com/repos/huggingface/datasets/issues/3136/events
https://github.com/huggingface/datasets/pull/3136
1,033,360,396
PR_kwDODunzps4tieFi
3,136
Fix script of Arabic Billion Words dataset to return all data
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-22T09:14:24"
"2021-10-22T13:28:41"
"2021-10-22T13:28:40"
MEMBER
null
The script has a bug and only parses and generates a portion of the entire dataset. This PR fixes the loading script so that it properly parses the entire dataset. The current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurations except for one:
- For "Youm7" we generate more examples (1172136) than the ones reported by the paper (1025027)

| Configuration | Number of examples | Number of examples according to the source |
|:---------------|-------------------:|--------------------------------------------:|
| Alittihad | 349342 | 349342 |
| Almasryalyoum | 291723 | 291723 |
| Almustaqbal | 446873 | 446873 |
| Alqabas | 817274 | 817274 |
| Echoroukonline | 139732 | 139732 |
| Ryiadh | 858188 | 858188 |
| Sabanews | 92149 | 92149 |
| SaudiYoum | 888068 | 888068 |
| Techreen | 314597 | 314597 |
| Youm7 | 1172136 | 1025027 |

Fix #3126.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3136/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3136", "html_url": "https://github.com/huggingface/datasets/pull/3136", "diff_url": "https://github.com/huggingface/datasets/pull/3136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3136.patch", "merged_at": "2021-10-22T13:28:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/3135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3135/comments
https://api.github.com/repos/huggingface/datasets/issues/3135/events
https://github.com/huggingface/datasets/issues/3135
1,033,294,299
I_kwDODunzps49ltHb
3,135
Make inspect.get_dataset_config_names always return a non-empty list of configs
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?", "Yes, maybe the issue could be reformulated. As a user, I want to avoid having to manage special cases:\r\n- I want to be able to get the names of a dataset's configs, and use them in the rest of the API (get the data, get the split names, etc).\r\n- I don't want to have to manage datasets with named configs (`glue`) differently from datasets without named configs (`acronym_identification`, `Check/region_1`)" ]
"2021-10-22T08:02:50"
"2021-10-28T05:44:49"
"2021-10-28T05:44:49"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to. **Describe the solution you'd like** In that sense, `inspect.get_dataset_config_names` should always return at least one configuration name, be it `default` or `Check___region_1` (for community datasets like `Check/region_1`). https://github.com/huggingface/datasets/blob/c5747a5e1dde2670b7f2ca6e79e2ffd99dff85af/src/datasets/inspect.py#L161
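A sketch of the requested contract; the `"default"` fallback name is the issue's proposal, not necessarily what was shipped:

```python
from datasets import get_dataset_config_names

for name in ["glue", "acronym_identification"]:
    configs = get_dataset_config_names(name)
    print(name, configs)       # e.g. ["cola", "sst2", ...] for glue, ["default"] otherwise
    assert len(configs) >= 1   # never an empty list under the proposed contract
```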
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3135/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3134/comments
https://api.github.com/repos/huggingface/datasets/issues/3134/events
https://github.com/huggingface/datasets/issues/3134
1,033,251,755
I_kwDODunzps49liur
3,134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
{ "login": "yananchen1989", "id": 26405281, "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yananchen1989", "html_url": "https://github.com/yananchen1989", "followers_url": "https://api.github.com/users/yananchen1989/followers", "following_url": "https://api.github.com/users/yananchen1989/following{/other_user}", "gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}", "starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions", "organizations_url": "https://api.github.com/users/yananchen1989/orgs", "repos_url": "https://api.github.com/users/yananchen1989/repos", "events_url": "https://api.github.com/users/yananchen1989/events{/privacy}", "received_events_url": "https://api.github.com/users/yananchen1989/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nDid you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. \r\n\r\nAdditionally, can you please run the `datasets-cli env` command because it seems to me that you are using the `datasets` version different from `1.12.1`?", "Same issue when running `metric = datasets.load_metric(\"accuracy\")`.\r\nError info is:\r\n```\r\nmetric = datasets.load_metric(\"accuracy\")\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-d25db38b26c5>\", line 1, in <module>\r\n metric = datasets.load_metric(\"accuracy\")\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 610, in load_metric\r\n module_path, _ = prepare_module(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 330, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 288, in cached_path\r\n output_path = get_from_cache(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 605, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py\r\n```\r\n\r\n\r\n My `datasets-cli env` result is as follows:\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 6.0.0\r\n\r\n@yananchen1989 did you find a way to solve this?", "It seems to be able to solve this issue by adding the equivalent `accuracy.py` locally. \r\nchange `metric = datasets.load_metric(\"accuracy\")` to `metric = datasets.load_metric(path = \"./accuracy.py\")`.\r\nCopy `accuracy.py` from browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py)" ]
"2021-10-22T07:07:52"
"2022-01-19T14:02:32"
"2022-01-19T14:02:31"
NONE
null
`datasets` version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) > 613 download_config=download_config, > 614 download_mode=download_mode, > --> 615 dataset=False, > 616 ) > 617 metric_cls = import_main_class(module_path, dataset=False) > > /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) > 328 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) > 329 try: > --> 330 local_path = cached_path(file_path, download_config=download_config) > 331 except FileNotFoundError: > 332 if script_version is not None: > > /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) > 296 use_etag=download_config.use_etag, > 297 max_retries=download_config.max_retries, > --> 298 use_auth_token=download_config.use_auth_token, > 299 ) > 300 elif os.path.exists(url_or_filename): > > /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) > 603 raise FileNotFoundError("Couldn't find file at {}".format(url)) > 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") > --> 605 raise ConnectionError("Couldn't reach {}".format(url)) > 606 > 607 # Try a second time > > ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py Is there any remedy to solve the connection issue?
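The workaround suggested in the comments above, as a short sketch (the metric script has to be saved locally by hand first):

```python
import datasets

# rouge.py saved manually from the raw GitHub URL quoted in the traceback
metric = datasets.load_metric("./rouge.py")
```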
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3134/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3133/comments
https://api.github.com/repos/huggingface/datasets/issues/3133/events
https://github.com/huggingface/datasets/pull/3133
1,032,511,710
PR_kwDODunzps4tftyZ
3,133
Support Audio feature in streaming mode
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-21T13:37:57"
"2021-11-12T14:13:05"
"2021-11-12T14:13:04"
MEMBER
null
Fix #3132.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3133/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3133", "html_url": "https://github.com/huggingface/datasets/pull/3133", "diff_url": "https://github.com/huggingface/datasets/pull/3133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3133.patch", "merged_at": "2021-11-12T14:13:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/3132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3132/comments
https://api.github.com/repos/huggingface/datasets/issues/3132/events
https://github.com/huggingface/datasets/issues/3132
1,032,505,430
I_kwDODunzps49ishW
3,132
Support Audio feature in streaming mode
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-10-21T13:32:18"
"2021-11-12T14:13:04"
"2021-11-12T14:13:04"
MEMBER
null
Currently, the Audio feature is only supported for non-streaming datasets. Due to the large size of many speech datasets, we should also support the Audio feature in streaming mode.
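A sketch of the target usage once streaming support lands; the API shape is assumed from the non-streaming Audio feature, and "common_voice" is just an example dataset:

```python
from datasets import load_dataset

ds = load_dataset("common_voice", "en", split="train", streaming=True)
sample = next(iter(ds))
audio = sample["audio"]  # expected: {"array": ..., "sampling_rate": ...}, decoded on the fly
```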
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3132/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3131/comments
https://api.github.com/repos/huggingface/datasets/issues/3131/events
https://github.com/huggingface/datasets/issues/3131
1,032,309,865
I_kwDODunzps49h8xp
3,131
Add ADE20k
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
closed
false
null
[]
null
[ "I think we can close this issue since PR [#3607](https://github.com/huggingface/datasets/pull/3607) solves this." ]
"2021-10-21T10:13:09"
"2023-01-27T14:40:20"
"2023-01-27T14:40:20"
CONTRIBUTOR
null
## Adding a Dataset - **Name:** ADE20k (it is actually called the MIT Scene Parsing Benchmark, a subset of ADE20k, but many authors still call it ADE20k) - **Description:** A semantic segmentation dataset, consisting of 150 classes. - **Paper:** http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf - **Data:** http://sceneparsing.csail.mit.edu/ - **Motivation:** I am currently adding Transformer-based semantic segmentation models that achieve SOTA on this dataset. It would be great to directly access this dataset using HuggingFace Datasets, in order to make example scripts in HuggingFace Transformers. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3131/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3130
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3130/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3130/comments
https://api.github.com/repos/huggingface/datasets/issues/3130/events
https://github.com/huggingface/datasets/pull/3130
1,032,299,417
PR_kwDODunzps4tfBJU
3,130
Create SECURITY.md
{ "login": "zidingz", "id": 28839565, "node_id": "MDQ6VXNlcjI4ODM5NTY1", "avatar_url": "https://avatars.githubusercontent.com/u/28839565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zidingz", "html_url": "https://github.com/zidingz", "followers_url": "https://api.github.com/users/zidingz/followers", "following_url": "https://api.github.com/users/zidingz/following{/other_user}", "gists_url": "https://api.github.com/users/zidingz/gists{/gist_id}", "starred_url": "https://api.github.com/users/zidingz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zidingz/subscriptions", "organizations_url": "https://api.github.com/users/zidingz/orgs", "repos_url": "https://api.github.com/users/zidingz/repos", "events_url": "https://api.github.com/users/zidingz/events{/privacy}", "received_events_url": "https://api.github.com/users/zidingz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-21T10:03:03"
"2021-10-21T14:33:28"
"2021-10-21T14:31:50"
NONE
null
To let the repository confirm feedback@huggingface.co as its security contact.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3130/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3130", "html_url": "https://github.com/huggingface/datasets/pull/3130", "diff_url": "https://github.com/huggingface/datasets/pull/3130.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3130.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3129/comments
https://api.github.com/repos/huggingface/datasets/issues/3129/events
https://github.com/huggingface/datasets/pull/3129
1,032,234,167
PR_kwDODunzps4tezlA
3,129
Support Audio feature for TAR archives in sequential access
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-21T08:56:51"
"2021-11-17T17:42:08"
"2021-11-17T17:42:07"
MEMBER
null
Add Audio feature support for TAR-archived files in sequential access. Fix #3128.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3129/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3129", "html_url": "https://github.com/huggingface/datasets/pull/3129", "diff_url": "https://github.com/huggingface/datasets/pull/3129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3129.patch", "merged_at": "2021-11-17T17:42:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/3128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3128/comments
https://api.github.com/repos/huggingface/datasets/issues/3128/events
https://github.com/huggingface/datasets/issues/3128
1,032,201,870
I_kwDODunzps49hiaO
3,128
Support Audio feature for TAR archives in sequential access
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-10-21T08:23:01"
"2021-11-17T17:42:07"
"2021-11-17T17:42:07"
MEMBER
null
Currently, the Audio feature accesses each audio file by its file path. However, streamed TAR archives do not allow random access to their archived files. Therefore, we should enhance the Audio feature to support TAR-archived files in sequential access.
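The constraint in stdlib terms: a streamed TAR only supports forward iteration, so members must be consumed in archive order. A minimal illustration (the filename is a placeholder):

```python
import tarfile

# "r|*" opens the archive as a forward-only stream: no seeking back.
with tarfile.open("audio_archive.tar", mode="r|*") as tar:
    for member in tar:
        if member.isfile() and member.name.endswith(".wav"):
            audio_bytes = tar.extractfile(member).read()  # read before advancing
```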
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3128/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3126/comments
https://api.github.com/repos/huggingface/datasets/issues/3126/events
https://github.com/huggingface/datasets/issues/3126
1,032,093,055
I_kwDODunzps49hH1_
3,126
"arabic_billion_words" dataset does not create the full dataset
{ "login": "vitalyshalumov", "id": 33824221, "node_id": "MDQ6VXNlcjMzODI0MjIx", "avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vitalyshalumov", "html_url": "https://github.com/vitalyshalumov", "followers_url": "https://api.github.com/users/vitalyshalumov/followers", "following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}", "gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}", "starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions", "organizations_url": "https://api.github.com/users/vitalyshalumov/orgs", "repos_url": "https://api.github.com/users/vitalyshalumov/repos", "events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}", "received_events_url": "https://api.github.com/users/vitalyshalumov/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @vitalyshalumov.\r\n\r\nApparently the script to parse the data has a bug, and does not generate the entire dataset.\r\n\r\nI'm fixing it." ]
"2021-10-21T06:02:38"
"2021-10-22T13:28:40"
"2021-10-22T13:28:40"
NONE
null
## Describe the bug When running: `raw_dataset = load_dataset('arabic_billion_words','Alittihad')` the correct dataset file is pulled from the URL, but the generated dataset includes just a small portion of the data included in the file. This is true for all other portions of the "arabic_billion_words" dataset ('Almasryalyoum', ...). ## Steps to reproduce the bug ```python # Sample code to reproduce the bug raw_dataset = load_dataset('arabic_billion_words','Alittihad') # The screen message Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 20.62 MiB, post-processed: Unknown size, total: 352.74 MiB) ``` ## Expected results over 100K sentences ## Actual results only 11K sentences ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3126/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3125/comments
https://api.github.com/repos/huggingface/datasets/issues/3125/events
https://github.com/huggingface/datasets/pull/3125
1,032,046,666
PR_kwDODunzps4teNPC
3,125
Add SLR83 to OpenSLR
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-21T04:26:00"
"2021-10-22T20:10:05"
"2021-10-22T08:30:22"
CONTRIBUTOR
null
The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3125/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3125", "html_url": "https://github.com/huggingface/datasets/pull/3125", "diff_url": "https://github.com/huggingface/datasets/pull/3125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3125.patch", "merged_at": "2021-10-22T08:30:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/3124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3124/comments
https://api.github.com/repos/huggingface/datasets/issues/3124/events
https://github.com/huggingface/datasets/pull/3124
1,031,976,286
PR_kwDODunzps4td-5w
3,124
More efficient nested features encoding
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-21T01:55:31"
"2021-11-02T15:07:13"
"2021-11-02T11:04:04"
CONTRIBUTOR
null
Nested encoding of features wastes a lot of time on operations which are effectively doing nothing when lists are used. For example, if in the input we have a list of integers, `encoded_nested_example` will iterate over it and apply `encoded_nested_example` on every element even though it just returns the int as is. A similar issue is handled at an earlier stage when casting pytorch/tensorflow/pandas objects to python lists/numpy arrays: https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L149-L156 https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L212-L228 In this pull request I suggest using the same approach in `encoded_nested_example`. In my setup there was a major speedup with this change: loading the data was at least x4 faster.
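A simplified sketch of the short-circuit idea (the real `encoded_nested_example` also dispatches on the feature schema, which is omitted here):

```python
def encoded_nested_example(schema, obj):
    if isinstance(obj, list):
        # Short-circuit: a flat list of scalars needs no per-element recursion.
        if obj and not isinstance(obj[0], (dict, list)):
            return obj
        return [encoded_nested_example(schema, o) for o in obj]
    return obj  # scalars pass through unchanged


assert encoded_nested_example(None, list(range(5))) == [0, 1, 2, 3, 4]
```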
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3124/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3124", "html_url": "https://github.com/huggingface/datasets/pull/3124", "diff_url": "https://github.com/huggingface/datasets/pull/3124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3124.patch", "merged_at": "2021-11-02T11:04:04" }
true
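A minimal sketch of the early-exit idea described in PR 3124 above: when the elements of a list need no encoding (e.g. plain ints), return the list as is instead of recursing into every element. The function name and structure below are illustrative assumptions, not the actual `datasets` source:

```python
# Illustrative sketch (not the library's real implementation): skip the
# per-element recursion when encoding the first element is a no-op.
def encode_nested_example_sketch(schema, obj):
    if isinstance(schema, dict):
        return {k: encode_nested_example_sketch(schema[k], v) for k, v in obj.items()}
    if isinstance(schema, list):
        if obj is None:
            return None
        sub_schema = schema[0]
        # Early exit: if the first element encodes to itself, assume the
        # whole list is already encoded and return it unchanged.
        if len(obj) > 0 and encode_nested_example_sketch(sub_schema, obj[0]) is obj[0]:
            return obj
        return [encode_nested_example_sketch(sub_schema, v) for v in obj]
    return obj  # leaf values (ints, strings, ...) are returned as is
```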
https://api.github.com/repos/huggingface/datasets/issues/3123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3123/comments
https://api.github.com/repos/huggingface/datasets/issues/3123/events
https://github.com/huggingface/datasets/issues/3123
1,031,793,207
I_kwDODunzps49f-o3
3,123
Segmentation fault when loading datasets from file
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-14439\r\n\r\n```python\r\nimport io\r\n\r\nimport pyarrow.json as paj\r\n\r\nbatch = b'{\"a\": [], \"b\": 1}\\n{\"b\": 1}'\r\nblock_size = 12\r\n\r\npaj.read_json(\r\n io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)\r\n)\r\n```\r\n\r\nI don't see a way to workaround this properly now without hurting the performance of the JSON loader significantly though", "The issue has been fixed in pyarrow 6.0.0, please update pyarrow :)\r\n\r\nThe issue was due to missing fields in the JSON data of type list. Now it's working fine and missing list fields are replaced with empty lists" ]
"2021-10-20T20:16:11"
"2021-11-02T14:57:07"
"2021-11-02T14:57:07"
MEMBER
null
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e5051507651ad/tiny_kelm.jsonl ``` Then in Python: ``` import datasets tiny_kelm = datasets.load_dataset("json", data_files="tiny_kelm.jsonl", chunksize=100000) ``` ## Expected results a functional `tiny_kelm` dataset ## Actual results ☠️ `Segmentation fault (core dumped)` ☠️ ## Environment info - `datasets` version: 1.14.0 - Platform: Linux-5.11.0-38-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3123/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3122/comments
https://api.github.com/repos/huggingface/datasets/issues/3122/events
https://github.com/huggingface/datasets/issues/3122
1,031,787,509
I_kwDODunzps49f9P1
3,122
OSError with a custom dataset loading script
{ "login": "suzanab", "id": 38602977, "node_id": "MDQ6VXNlcjM4NjAyOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/38602977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suzanab", "html_url": "https://github.com/suzanab", "followers_url": "https://api.github.com/users/suzanab/followers", "following_url": "https://api.github.com/users/suzanab/following{/other_user}", "gists_url": "https://api.github.com/users/suzanab/gists{/gist_id}", "starred_url": "https://api.github.com/users/suzanab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suzanab/subscriptions", "organizations_url": "https://api.github.com/users/suzanab/orgs", "repos_url": "https://api.github.com/users/suzanab/repos", "events_url": "https://api.github.com/users/suzanab/events{/privacy}", "received_events_url": "https://api.github.com/users/suzanab/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthere is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data` -> data files).\r\n\r\nThis can be fixed by removing the `os.path.join` call in https://huggingface.co/datasets/classla/janes_tag/blob/main/janes_tag.py#L86\r\n\r\nLet me know if this works for you.", "Hi Mario,\r\n\r\nI had already tried that before, but it didn't work. I have now recreated the `classla/janes_tag` zip file so that it also contains the `data` directory, but I am still getting the same error.", "Hi,\r\n\r\nI just tried to download the `classla/janes_tag` dataset, and this time the zip file is extracted correctly. However, the script is now throwing the IndexError, probably due to a bug in the `_generate_examples`.\r\n\r\nLet me know if you are still getting the same error.", "I am still getting the same error.", "Hi, \r\n\r\ncould you try to download the dataset with a different `cache_dir` like so:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('classla/janes_tag', split='validation', cache_dir=\"path/to/different/cache/dir\")\r\n```\r\nIf this works, then most likely the cached extracted data is causing issues. This data is stored at `~/.cache/huggingface/datasets/downloads/extracted` and needs to be deleted, and then it should work (you can easily locate the directory with the path given in the `OSError` message). Additionally, I'd suggest you to update `datasets` to the newest version with:\r\n```\r\npip install -U datasets\r\n```", "Thank you, deleting the `~/.cache/huggingface/datasets/downloads/extracted` directory helped. However, I am still having problems.\r\n\r\nThere was indeed a bug in the script that was throwing an `IndexError`, which I have now corrected (added the condition to skip the lines starting with '# text') and it is working locally, but still throws an error when I try to load the dataset from HuggingFace. I literally copied and pasted the `_generate_examples` function and ran it on the `dev_all.conllup` file, which I even re-downloaded from the repository to be certain that the files are exactly the same. I also deleted everything again just in case, but it didn't help. The code works locally, but throws an `IndexError` when loading from `datasets.`", "Hi,\r\n\r\nDid some investigation.\r\n\r\nTo fix the dataset script on the Hub, append the following labels to the `names` list of the `upos_tags` field:\r\n```'INTJ NOUN', 'AUX PRON', 'PART ADV', 'PRON ADP', 'INTJ INTJ', 'VERB NOUN', 'NOUN AUX'```.\r\n\r\nThis step is required to avoid an error due to missing labels in the following step which is:\r\n```python\r\nload_dataset(\"classla/janes_tag\", split=\"validation\", download_mode=\"force_redownload\")\r\n```\r\nThis will generate and cache the dataset, so specifying `download_mode` will not be required anymore unless you update the script/data on the Hub.", "It works now, thank you!" ]
"2021-10-20T20:08:39"
"2021-11-23T09:55:38"
"2021-11-23T09:55:38"
NONE
null
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory structure, yet I am only getting an error with janes_tag. ## Steps to reproduce the bug ```python dataset = datasets.load_dataset('classla/janes_tag', split='validation') ``` ## Expected results Dataset correctly loaded. ## Actual results Traceback (most recent call last): File "C:/mypath/test.py", line 91, in <module> load_and_print('janes_tag') File "C:/mypath/test.py", line 32, in load_and_print dataset = datasets.load_dataset('classla/{}'.format(ds_name), split='validation') File "C:\mypath\venv\lib\site-packages\datasets\load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 704, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: [Errno 2] No such file or directory: 'C:\\mypath\\.cache\\huggingface\\datasets\\downloads\\2c9996e44bdc5af9c89bffb9e6d7a3e42fdb2f56bacab45de13b20f3032ea7ca\\data\\train_all.conllup' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.5 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3122/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3121/comments
https://api.github.com/repos/huggingface/datasets/issues/3121/events
https://github.com/huggingface/datasets/pull/3121
1,031,673,115
PR_kwDODunzps4tc_6q
3,121
Use huggingface_hub.HfApi to list datasets/metrics
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-20T17:48:29"
"2021-11-05T11:45:08"
"2021-11-05T09:48:36"
CONTRIBUTOR
null
Delete `datasets.inspect.HfApi` and use `huggingface_hub.HfApi` instead. WIP until https://github.com/huggingface/huggingface_hub/pull/429 is merged, then wait for the new release of `huggingface_hub`, update the `huggingface_hub` version in `setup.py` and merge this PR. cc: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3121/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3121", "html_url": "https://github.com/huggingface/datasets/pull/3121", "diff_url": "https://github.com/huggingface/datasets/pull/3121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3121.patch", "merged_at": "2021-11-05T09:48:35" }
true
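For context on PR 3121 above, listing datasets through `huggingface_hub` directly looks roughly like this; `HfApi.list_datasets` exists in `huggingface_hub`, though the exact shape of the returned info objects depends on the installed version:

```python
from huggingface_hub import HfApi

api = HfApi()
# Returns an iterable of dataset info objects; each exposes at least an `id`.
datasets_on_hub = list(api.list_datasets())
print(len(datasets_on_hub))
print(datasets_on_hub[0].id)
```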
https://api.github.com/repos/huggingface/datasets/issues/3120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3120/comments
https://api.github.com/repos/huggingface/datasets/issues/3120/events
https://github.com/huggingface/datasets/pull/3120
1,031,574,511
PR_kwDODunzps4tcril
3,120
Correctly update metadata to preserve features when concatenating datasets with axis=1
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-20T15:54:58"
"2021-10-22T08:28:51"
"2021-10-21T14:50:21"
CONTRIBUTOR
null
This PR correctly updates metadata to preserve higher-level feature types (e.g. `ClassLabel`) in `datasets.concatenate_datasets` when `axis=1`. Previously, we would delete the feature metadata in `datasets.concatenate_datasets` if `axis=1` and restore the feature types from the arrow table schema in `Dataset.__init__`. However, this approach only works for simple feature types (e.g. `Value`). Fixes #3111
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3120/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3120", "html_url": "https://github.com/huggingface/datasets/pull/3120", "diff_url": "https://github.com/huggingface/datasets/pull/3120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3120.patch", "merged_at": "2021-10-21T14:50:21" }
true
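A small check of the behavior PR 3120 above guarantees: after concatenating along columns, higher-level feature types such as `ClassLabel` should survive. This assumes a `datasets` version that includes the fix:

```python
from datasets import ClassLabel, Dataset, Features, Value, concatenate_datasets

left = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]}).cast(
    Features({"text": Value("string"), "label": ClassLabel(names=["NEG", "POS"])})
)
right = Dataset.from_dict({"pred": [1, 0]})

combined = concatenate_datasets([left, right], axis=1)
# With the fix, `label` stays a ClassLabel instead of degrading to int64.
assert isinstance(combined.features["label"], ClassLabel)
```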
https://api.github.com/repos/huggingface/datasets/issues/3119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3119/comments
https://api.github.com/repos/huggingface/datasets/issues/3119/events
https://github.com/huggingface/datasets/issues/3119
1,031,328,044
I_kwDODunzps49eNEs
3,119
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false } ]
null
[ "Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files." ]
"2021-10-20T12:05:07"
"2021-10-22T19:00:52"
"2021-10-22T08:30:22"
CONTRIBUTOR
null
## Adding a Dataset - **Name:** *openslr* - **Description:** *Dataset which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* - **Data:** *Eleven separate data files can be found via https://www.openslr.org/resources/83/* - **Motivation:** *Increase English ASR data with UK and Irish dialects* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). The *openslr* dataset already exists; this will add an additional subset, *SLR83*.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3119/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3118/comments
https://api.github.com/repos/huggingface/datasets/issues/3118/events
https://github.com/huggingface/datasets/pull/3118
1,031,309,549
PR_kwDODunzps4tb0LY
3,118
Fix CI error at each release commit
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-20T11:44:38"
"2021-10-20T13:02:36"
"2021-10-20T13:02:36"
MEMBER
null
Fix test_load_dataset_canonical at release commit. Fix #3117.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3118/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3118", "html_url": "https://github.com/huggingface/datasets/pull/3118", "diff_url": "https://github.com/huggingface/datasets/pull/3118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3118.patch", "merged_at": "2021-10-20T13:02:35" }
true
https://api.github.com/repos/huggingface/datasets/issues/3117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3117/comments
https://api.github.com/repos/huggingface/datasets/issues/3117/events
https://github.com/huggingface/datasets/issues/3117
1,031,308,083
I_kwDODunzps49eIMz
3,117
CI error at each release commit
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-10-20T11:42:53"
"2021-10-20T13:02:35"
"2021-10-20T13:02:35"
MEMBER
null
After 1.12.0, there is a recurrent CI error at each release commit: https://app.circleci.com/pipelines/github/huggingface/datasets/8289/workflows/665d954d-e409-4602-8202-e678594d2946/jobs/51110 ``` ____________________ LoadTest.test_load_dataset_canonical _____________________ [gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe self = <tests.test_load.LoadTest testMethod=test_load_dataset_canonical> def test_load_dataset_canonical(self): scripts_version = os.getenv("HF_SCRIPTS_VERSION", SCRIPTS_VERSION) with self.assertRaises(FileNotFoundError) as context: datasets.load_dataset("_dummy") self.assertIn( f"https://raw.githubusercontent.com/huggingface/datasets/{scripts_version}/datasets/_dummy/_dummy.py", > str(context.exception), ) E AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/1.14.0/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at C:\\Users\\circleci\\datasets\\_dummy\\_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py" tests\test_load.py:358: AssertionError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3117/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3116/comments
https://api.github.com/repos/huggingface/datasets/issues/3116/events
https://github.com/huggingface/datasets/pull/3116
1,031,270,611
PR_kwDODunzps4tbr6g
3,116
Update doc links to point to new docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
"2021-10-20T11:00:47"
"2021-10-22T08:29:28"
"2021-10-22T08:26:45"
CONTRIBUTOR
null
This PR: * updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template) * fixes some broken links in the `.rst` files (detected with the `make linkcheck` tool)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3116/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3116", "html_url": "https://github.com/huggingface/datasets/pull/3116", "diff_url": "https://github.com/huggingface/datasets/pull/3116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3116.patch", "merged_at": "2021-10-22T08:26:45" }
true
https://api.github.com/repos/huggingface/datasets/issues/3115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3115/comments
https://api.github.com/repos/huggingface/datasets/issues/3115/events
https://github.com/huggingface/datasets/pull/3115
1,030,737,524
PR_kwDODunzps4tZ-Vr
3,115
Fill in dataset card for NCBI disease dataset
{ "login": "edugp", "id": 17855740, "node_id": "MDQ6VXNlcjE3ODU1NzQw", "avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edugp", "html_url": "https://github.com/edugp", "followers_url": "https://api.github.com/users/edugp/followers", "following_url": "https://api.github.com/users/edugp/following{/other_user}", "gists_url": "https://api.github.com/users/edugp/gists{/gist_id}", "starred_url": "https://api.github.com/users/edugp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edugp/subscriptions", "organizations_url": "https://api.github.com/users/edugp/orgs", "repos_url": "https://api.github.com/users/edugp/repos", "events_url": "https://api.github.com/users/edugp/events{/privacy}", "received_events_url": "https://api.github.com/users/edugp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-19T20:57:05"
"2021-10-22T08:25:07"
"2021-10-22T08:25:07"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3115/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3115", "html_url": "https://github.com/huggingface/datasets/pull/3115", "diff_url": "https://github.com/huggingface/datasets/pull/3115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3115.patch", "merged_at": "2021-10-22T08:25:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/3114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3114/comments
https://api.github.com/repos/huggingface/datasets/issues/3114/events
https://github.com/huggingface/datasets/issues/3114
1,030,693,130
I_kwDODunzps49byEK
3,114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.", "Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` once I update arrow to 6.0.0.\r\n\r\nThanks!" ]
"2021-10-19T20:01:45"
"2022-02-14T14:00:28"
"2022-02-14T14:00:28"
CONTRIBUTOR
null
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by the `load_from_disk` methods in `DatasetDict` (in dataset_dict.py) and `Dataset` (in arrow_dataset.py)) results in an error when calling the download method of the `fs` parameter. ## Steps to reproduce the bug The documentation for the `fs` parameter states: ``` fs (:class:`~filesystems.S3FileSystem` or ``fsspec.spec.AbstractFileSystem``, optional, default ``None``): Instance of the remote filesystem used to download the files from. ``` `PyArrowHDFS` from [fsspec](https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/hdfs.html) implements `fsspec.spec.AbstractFileSystem`. However, when using it as shown below, I get an error. ```python from fsspec.implementations.hdfs import PyArrowHDFS ... transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/" fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket) dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True) ``` ## Expected results Prior to loading from disk, I had managed to successfully store in HDFS the data and meta-information of a DatasetDict by doing: ```python transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/" fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket) my_datasets.save_to_disk(transformed_corpus_path, fs=fs) ``` As I have 3 datasets in the DatasetDict named `my_datasets`, the previous Python code creates the following contents in HDFS: ```sh $ hadoop fs -ls "/user/my_user/clickbait/transformed_ds/" Found 4 items -rw------- 3 my_user users 43 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/dataset_dict.json drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/test drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/train drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/validation ``` I would expect to recover in `dss` the Arrow-backed datasets I previously saved in HDFS by calling the `save_to_disk` method on the `DatasetDict` object when invoking `DatasetDict.load_from_disk(...)` as described above. ## Actual results However, when trying to recover the saved datasets, I get this error: ``` ... File "/home/fperez/dev/neuromancer/neuromancer/corpus.py", line 186, in load_transformed_corpus_from_disk dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True) File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/dataset_dict.py", line 748, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1048, in load_from_disk fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True) File "pyarrow/_hdfsio.pyx", line 438, in pyarrow._hdfsio.HadoopFileSystem.download TypeError: download() got an unexpected keyword argument 'recursive' ``` Examining the [signature of the download method in pyarrow 5.0.0](https://github.com/apache/arrow/blob/54d2bd89c99df72fa091b025452f85dd5d88e3cf/python/pyarrow/_hdfsio.pyx#L438) we can see that there's no `recursive` parameter: ```python def download(self, path, stream, buffer_size=None): with self.open(path, 'rb') as f: f.download(stream, buffer_size=buffer_size) ``` ## Environment info - `datasets` version: 1.13.3 - Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3114/timeline
null
completed
null
null
false
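The last comment in issue 3114 above mentions switching to `fsspec.implementations.arrow.HadoopFileSystem` as a workaround. A minimal sketch, assuming an HDFS cluster reachable with the connection details shown (host, port and user are placeholders):

```python
from fsspec.implementations.arrow import HadoopFileSystem
from datasets import DatasetDict

# This fsspec wrapper around pyarrow's HDFS support provides the
# `download(..., recursive=True)` semantics that `load_from_disk` expects.
fs = HadoopFileSystem(host="namenode-host", port=8020, user="my_user")  # placeholders
dss = DatasetDict.load_from_disk("/user/my_user/clickbait/transformed_ds/", fs=fs)
```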
https://api.github.com/repos/huggingface/datasets/issues/3111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3111/comments
https://api.github.com/repos/huggingface/datasets/issues/3111/events
https://github.com/huggingface/datasets/issues/3111
1,030,598,983
I_kwDODunzps49bbFH
3,111
concatenate_datasets removes ClassLabel typing.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1" ]
"2021-10-19T18:05:31"
"2021-10-21T14:50:21"
"2021-10-21T14:50:21"
CONTRIBUTOR
null
## Describe the bug When concatenating two datasets, we lose the typing of ClassLabel columns. I can work on this if this is a legitimate bug. ## Steps to reproduce the bug ```python import datasets from datasets import Dataset, ClassLabel, Value, concatenate_datasets DS_LEN = 100 my_dataset = Dataset.from_dict( { "sentence": [f"{chr(i % 10)}" for i in range(DS_LEN)], "label": [i % 2 for i in range(DS_LEN)] } ) my_predictions = Dataset.from_dict( { "pred": [(i + 1) % 2 for i in range(DS_LEN)] } ) my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])})) print("Original") print(my_dataset) print(my_dataset.features) concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1) print("Concatenated") print(concat_ds) print(concat_ds.features) ``` ## Expected results The features of `concat_ds` should contain ClassLabel. ## Actual results On master, I get: ``` {'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)} ``` ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3111/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3110/comments
https://api.github.com/repos/huggingface/datasets/issues/3110/events
https://github.com/huggingface/datasets/pull/3110
1,030,558,484
PR_kwDODunzps4tZakS
3,110
Stream TAR-based dataset using iter_archive
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-10-19T17:16:24"
"2021-11-05T17:48:49"
"2021-11-05T17:48:48"
MEMBER
null
I converted all the datasets based on TAR archives to use iter_archive instead, so that they can be streamed. This means that around 80 datasets become streamable :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3110/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3110", "html_url": "https://github.com/huggingface/datasets/pull/3110", "diff_url": "https://github.com/huggingface/datasets/pull/3110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3110.patch", "merged_at": "2021-11-05T17:48:48" }
true
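A hedged sketch of the `iter_archive` pattern PR 3110 above applies across the TAR-based datasets: `dl_manager.iter_archive` yields `(path_inside_tar, file_object)` pairs without extracting the archive, which is what makes these datasets streamable. The archive URL and feature names below are placeholders:

```python
import datasets

_ARCHIVE_URL = "https://example.com/data.tar.gz"  # placeholder

class TarBasedDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"file_name": datasets.Value("string"), "text": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_ARCHIVE_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # iter_archive streams TAR members one by one: no extraction needed
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for key, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield key, {"file_name": path, "text": f.read().decode("utf-8")}
```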